00:00:00.001 Started by upstream project "autotest-per-patch" build number 127197 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:05.971 The recommended git tool is: git 00:00:05.971 using credential 00000000-0000-0000-0000-000000000002 00:00:05.974 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:05.987 Fetching changes from the remote Git repository 00:00:05.990 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:06.001 Using shallow fetch with depth 1 00:00:06.001 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:06.001 > git --version # timeout=10 00:00:06.013 > git --version # 'git version 2.39.2' 00:00:06.013 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:06.024 Setting http proxy: proxy-dmz.intel.com:911 00:00:06.024 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:12.145 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:12.157 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:12.169 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:12.169 > git config core.sparsecheckout # timeout=10 00:00:12.178 > git read-tree -mu HEAD # timeout=10 00:00:12.197 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:12.216 Commit message: "packer: Add bios builder" 00:00:12.216 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:12.315 [Pipeline] Start of Pipeline 00:00:12.327 [Pipeline] library 00:00:12.328 Loading library shm_lib@master 00:00:12.328 Library shm_lib@master is cached. Copying from home. 00:00:12.342 [Pipeline] node 00:00:12.351 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_3 00:00:12.353 [Pipeline] { 00:00:12.360 [Pipeline] catchError 00:00:12.362 [Pipeline] { 00:00:12.371 [Pipeline] wrap 00:00:12.377 [Pipeline] { 00:00:12.383 [Pipeline] stage 00:00:12.385 [Pipeline] { (Prologue) 00:00:12.397 [Pipeline] echo 00:00:12.399 Node: VM-host-SM9 00:00:12.404 [Pipeline] cleanWs 00:00:12.412 [WS-CLEANUP] Deleting project workspace... 00:00:12.412 [WS-CLEANUP] Deferred wipeout is used... 
00:00:12.419 [WS-CLEANUP] done 00:00:12.564 [Pipeline] setCustomBuildProperty 00:00:12.614 [Pipeline] httpRequest 00:00:12.637 [Pipeline] echo 00:00:12.639 Sorcerer 10.211.164.101 is alive 00:00:12.646 [Pipeline] httpRequest 00:00:12.649 HttpMethod: GET 00:00:12.650 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:12.650 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:12.658 Response Code: HTTP/1.1 200 OK 00:00:12.658 Success: Status code 200 is in the accepted range: 200,404 00:00:12.659 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:19.904 [Pipeline] sh 00:00:20.184 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:20.200 [Pipeline] httpRequest 00:00:20.216 [Pipeline] echo 00:00:20.217 Sorcerer 10.211.164.101 is alive 00:00:20.223 [Pipeline] httpRequest 00:00:20.226 HttpMethod: GET 00:00:20.226 URL: http://10.211.164.101/packages/spdk_764779691715b9a4ebeee9b53dc81b2d87e4a4b5.tar.gz 00:00:20.226 Sending request to url: http://10.211.164.101/packages/spdk_764779691715b9a4ebeee9b53dc81b2d87e4a4b5.tar.gz 00:00:20.252 Response Code: HTTP/1.1 200 OK 00:00:20.252 Success: Status code 200 is in the accepted range: 200,404 00:00:20.253 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_764779691715b9a4ebeee9b53dc81b2d87e4a4b5.tar.gz 00:03:27.114 [Pipeline] sh 00:03:27.393 + tar --no-same-owner -xf spdk_764779691715b9a4ebeee9b53dc81b2d87e4a4b5.tar.gz 00:03:30.724 [Pipeline] sh 00:03:31.005 + git -C spdk log --oneline -n5 00:03:31.005 764779691 bdev/compress: print error code information in load compress bdev 00:03:31.005 a7c420308 bdev/compress: release reduce vol resource when comp bdev fails to be created. 
00:03:31.005 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:03:31.005 c2a77f51e module/bdev/nvme: add detach-monitor poller 00:03:31.005 e14876e17 lib/nvme: add spdk_nvme_scan_attached() 00:03:31.024 [Pipeline] writeFile 00:03:31.042 [Pipeline] sh 00:03:31.330 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:31.343 [Pipeline] sh 00:03:31.630 + cat autorun-spdk.conf 00:03:31.630 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:31.630 SPDK_TEST_NVME=1 00:03:31.630 SPDK_TEST_FTL=1 00:03:31.630 SPDK_TEST_ISAL=1 00:03:31.630 SPDK_RUN_ASAN=1 00:03:31.630 SPDK_RUN_UBSAN=1 00:03:31.630 SPDK_TEST_XNVME=1 00:03:31.630 SPDK_TEST_NVME_FDP=1 00:03:31.630 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:31.639 RUN_NIGHTLY=0 00:03:31.641 [Pipeline] } 00:03:31.659 [Pipeline] // stage 00:03:31.676 [Pipeline] stage 00:03:31.679 [Pipeline] { (Run VM) 00:03:31.694 [Pipeline] sh 00:03:31.978 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:31.979 + echo 'Start stage prepare_nvme.sh' 00:03:31.979 Start stage prepare_nvme.sh 00:03:31.979 + [[ -n 3 ]] 00:03:31.979 + disk_prefix=ex3 00:03:31.979 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:03:31.979 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:03:31.979 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:03:31.979 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:31.979 ++ SPDK_TEST_NVME=1 00:03:31.979 ++ SPDK_TEST_FTL=1 00:03:31.979 ++ SPDK_TEST_ISAL=1 00:03:31.979 ++ SPDK_RUN_ASAN=1 00:03:31.979 ++ SPDK_RUN_UBSAN=1 00:03:31.979 ++ SPDK_TEST_XNVME=1 00:03:31.979 ++ SPDK_TEST_NVME_FDP=1 00:03:31.979 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:31.979 ++ RUN_NIGHTLY=0 00:03:31.979 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:03:31.979 + nvme_files=() 00:03:31.979 + declare -A nvme_files 00:03:31.979 + backend_dir=/var/lib/libvirt/images/backends 00:03:31.979 + nvme_files['nvme.img']=5G 00:03:31.979 + nvme_files['nvme-cmb.img']=5G 00:03:31.979 + nvme_files['nvme-multi0.img']=4G 00:03:31.979 + nvme_files['nvme-multi1.img']=4G 00:03:31.979 + nvme_files['nvme-multi2.img']=4G 00:03:31.979 + nvme_files['nvme-openstack.img']=8G 00:03:31.979 + nvme_files['nvme-zns.img']=5G 00:03:31.979 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:31.979 + (( SPDK_TEST_FTL == 1 )) 00:03:31.979 + nvme_files["nvme-ftl.img"]=6G 00:03:31.979 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:31.979 + nvme_files["nvme-fdp.img"]=1G 00:03:31.979 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:03:31.979 + for nvme in "${!nvme_files[@]}" 00:03:31.979 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:03:31.979 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:31.979 + for nvme in "${!nvme_files[@]}" 00:03:31.979 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:03:31.979 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:03:31.979 + for nvme in "${!nvme_files[@]}" 00:03:31.979 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:03:31.979 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:31.979 + for nvme in "${!nvme_files[@]}" 00:03:31.979 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:03:31.979 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:31.979 + for nvme in "${!nvme_files[@]}" 00:03:31.979 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:03:31.979 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:32.238 + for nvme in "${!nvme_files[@]}" 00:03:32.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:03:32.238 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:32.238 + for nvme in "${!nvme_files[@]}" 00:03:32.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:03:32.238 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:32.238 + for nvme in "${!nvme_files[@]}" 00:03:32.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:03:32.238 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:03:32.238 + for nvme in "${!nvme_files[@]}" 00:03:32.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:03:32.238 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:32.238 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:03:32.238 + echo 'End stage prepare_nvme.sh' 00:03:32.238 End stage prepare_nvme.sh 00:03:32.250 [Pipeline] sh 00:03:32.531 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:32.531 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:03:32.531 00:03:32.531 
DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:03:32.531 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:03:32.531 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:03:32.531 HELP=0 00:03:32.531 DRY_RUN=0 00:03:32.531 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:03:32.531 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:03:32.531 NVME_AUTO_CREATE=0 00:03:32.531 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:03:32.531 NVME_CMB=,,,, 00:03:32.531 NVME_PMR=,,,, 00:03:32.531 NVME_ZNS=,,,, 00:03:32.531 NVME_MS=true,,,, 00:03:32.531 NVME_FDP=,,,on, 00:03:32.531 SPDK_VAGRANT_DISTRO=fedora38 00:03:32.531 SPDK_VAGRANT_VMCPU=10 00:03:32.531 SPDK_VAGRANT_VMRAM=12288 00:03:32.531 SPDK_VAGRANT_PROVIDER=libvirt 00:03:32.531 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:32.531 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:32.531 SPDK_OPENSTACK_NETWORK=0 00:03:32.531 VAGRANT_PACKAGE_BOX=0 00:03:32.531 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:03:32.531 FORCE_DISTRO=true 00:03:32.531 VAGRANT_BOX_VERSION= 00:03:32.531 EXTRA_VAGRANTFILES= 00:03:32.531 NIC_MODEL=e1000 00:03:32.531 00:03:32.531 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt' 00:03:32.531 /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:03:36.720 Bringing machine 'default' up with 'libvirt' provider... 00:03:36.978 ==> default: Creating image (snapshot of base box volume). 00:03:37.237 ==> default: Creating domain with the following settings... 
00:03:37.237 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721964711_f89dc5d21e5081599782 00:03:37.237 ==> default: -- Domain type: kvm 00:03:37.237 ==> default: -- Cpus: 10 00:03:37.237 ==> default: -- Feature: acpi 00:03:37.237 ==> default: -- Feature: apic 00:03:37.237 ==> default: -- Feature: pae 00:03:37.237 ==> default: -- Memory: 12288M 00:03:37.237 ==> default: -- Memory Backing: hugepages: 00:03:37.237 ==> default: -- Management MAC: 00:03:37.237 ==> default: -- Loader: 00:03:37.237 ==> default: -- Nvram: 00:03:37.237 ==> default: -- Base box: spdk/fedora38 00:03:37.237 ==> default: -- Storage pool: default 00:03:37.237 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721964711_f89dc5d21e5081599782.img (20G) 00:03:37.237 ==> default: -- Volume Cache: default 00:03:37.237 ==> default: -- Kernel: 00:03:37.237 ==> default: -- Initrd: 00:03:37.237 ==> default: -- Graphics Type: vnc 00:03:37.237 ==> default: -- Graphics Port: -1 00:03:37.237 ==> default: -- Graphics IP: 127.0.0.1 00:03:37.237 ==> default: -- Graphics Password: Not defined 00:03:37.237 ==> default: -- Video Type: cirrus 00:03:37.237 ==> default: -- Video VRAM: 9216 00:03:37.237 ==> default: -- Sound Type: 00:03:37.237 ==> default: -- Keymap: en-us 00:03:37.237 ==> default: -- TPM Path: 00:03:37.237 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:37.237 ==> default: -- Command line args: 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:37.237 ==> default: -> value=-drive, 00:03:37.237 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:37.237 ==> default: -> value=-drive, 00:03:37.237 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:03:37.237 ==> default: -> value=-drive, 00:03:37.237 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:37.237 ==> default: -> value=-drive, 00:03:37.237 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:37.237 ==> default: -> value=-drive, 00:03:37.237 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:03:37.237 ==> default: -> value=-drive, 00:03:37.237 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:03:37.237 ==> default: -> value=-device, 00:03:37.237 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:37.237 ==> default: Creating shared folders metadata... 00:03:37.237 ==> default: Starting domain. 00:03:39.146 ==> default: Waiting for domain to get an IP address... 00:03:57.261 ==> default: Waiting for SSH to become available... 00:03:57.261 ==> default: Configuring and enabling network interfaces... 00:03:59.791 default: SSH address: 192.168.121.219:22 00:03:59.791 default: SSH username: vagrant 00:03:59.791 default: SSH auth method: private key 00:04:01.690 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:09.793 ==> default: Mounting SSHFS shared folder... 00:04:10.359 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:04:10.359 ==> default: Checking Mount.. 00:04:11.756 ==> default: Folder Successfully Mounted! 00:04:11.756 ==> default: Running provisioner: file... 00:04:12.324 default: ~/.gitconfig => .gitconfig 00:04:12.583 00:04:12.583 SUCCESS! 00:04:12.583 00:04:12.583 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use. 00:04:12.583 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:12.583 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm. 00:04:12.583 00:04:12.592 [Pipeline] } 00:04:12.615 [Pipeline] // stage 00:04:12.626 [Pipeline] dir 00:04:12.627 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt 00:04:12.629 [Pipeline] { 00:04:12.649 [Pipeline] catchError 00:04:12.651 [Pipeline] { 00:04:12.670 [Pipeline] sh 00:04:12.950 + vagrant ssh-config --host vagrant 00:04:12.950 + sed -ne /^Host/,$p 00:04:12.950 + tee ssh_conf 00:04:17.171 Host vagrant 00:04:17.171 HostName 192.168.121.219 00:04:17.171 User vagrant 00:04:17.171 Port 22 00:04:17.171 UserKnownHostsFile /dev/null 00:04:17.171 StrictHostKeyChecking no 00:04:17.171 PasswordAuthentication no 00:04:17.171 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:04:17.171 IdentitiesOnly yes 00:04:17.171 LogLevel FATAL 00:04:17.171 ForwardAgent yes 00:04:17.171 ForwardX11 yes 00:04:17.171 00:04:17.184 [Pipeline] withEnv 00:04:17.186 [Pipeline] { 00:04:17.203 [Pipeline] sh 00:04:17.482 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:17.482 source /etc/os-release 00:04:17.482 [[ -e /image.version ]] && img=$(< /image.version) 00:04:17.482 # Minimal, systemd-like check. 
00:04:17.482 if [[ -e /.dockerenv ]]; then 00:04:17.482 # Clear garbage from the node's name: 00:04:17.482 # agt-er_autotest_547-896 -> autotest_547-896 00:04:17.482 # $HOSTNAME is the actual container id 00:04:17.482 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:17.482 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:17.482 # We can assume this is a mount from a host where container is running, 00:04:17.482 # so fetch its hostname to easily identify the target swarm worker. 00:04:17.482 container="$(< /etc/hostname) ($agent)" 00:04:17.482 else 00:04:17.482 # Fallback 00:04:17.482 container=$agent 00:04:17.482 fi 00:04:17.482 fi 00:04:17.482 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:17.482 00:04:17.493 [Pipeline] } 00:04:17.514 [Pipeline] // withEnv 00:04:17.520 [Pipeline] setCustomBuildProperty 00:04:17.534 [Pipeline] stage 00:04:17.536 [Pipeline] { (Tests) 00:04:17.552 [Pipeline] sh 00:04:17.827 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:17.839 [Pipeline] sh 00:04:18.113 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:18.126 [Pipeline] timeout 00:04:18.126 Timeout set to expire in 40 min 00:04:18.127 [Pipeline] { 00:04:18.141 [Pipeline] sh 00:04:18.413 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:18.978 HEAD is now at 764779691 bdev/compress: print error code information in load compress bdev 00:04:18.990 [Pipeline] sh 00:04:19.266 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:19.535 [Pipeline] sh 00:04:19.810 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:19.828 [Pipeline] sh 00:04:20.119 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:04:20.119 ++ readlink -f spdk_repo 00:04:20.119 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:20.119 + [[ -n /home/vagrant/spdk_repo ]] 00:04:20.119 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:20.119 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:20.119 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:20.119 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:20.119 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:20.119 + [[ nvme-vg-autotest == pkgdep-* ]] 00:04:20.119 + cd /home/vagrant/spdk_repo 00:04:20.119 + source /etc/os-release 00:04:20.119 ++ NAME='Fedora Linux' 00:04:20.119 ++ VERSION='38 (Cloud Edition)' 00:04:20.119 ++ ID=fedora 00:04:20.119 ++ VERSION_ID=38 00:04:20.119 ++ VERSION_CODENAME= 00:04:20.119 ++ PLATFORM_ID=platform:f38 00:04:20.119 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:20.119 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:20.119 ++ LOGO=fedora-logo-icon 00:04:20.119 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:20.119 ++ HOME_URL=https://fedoraproject.org/ 00:04:20.119 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:20.119 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:20.119 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:20.119 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:20.119 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:20.119 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:20.119 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:20.119 ++ SUPPORT_END=2024-05-14 00:04:20.119 ++ VARIANT='Cloud Edition' 00:04:20.119 ++ VARIANT_ID=cloud 00:04:20.119 + uname -a 00:04:20.119 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:20.119 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:20.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.943 Hugepages 00:04:20.943 node hugesize free / total 00:04:20.943 node0 1048576kB 0 / 0 00:04:20.943 node0 2048kB 0 / 0 00:04:20.943 00:04:20.943 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.943 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:20.943 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:20.943 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:20.943 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:20.943 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:20.943 + rm -f /tmp/spdk-ld-path 00:04:20.943 + source autorun-spdk.conf 00:04:20.943 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:20.943 ++ SPDK_TEST_NVME=1 00:04:20.943 ++ SPDK_TEST_FTL=1 00:04:20.943 ++ SPDK_TEST_ISAL=1 00:04:20.943 ++ SPDK_RUN_ASAN=1 00:04:20.943 ++ SPDK_RUN_UBSAN=1 00:04:20.943 ++ SPDK_TEST_XNVME=1 00:04:20.943 ++ SPDK_TEST_NVME_FDP=1 00:04:20.943 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:20.943 ++ RUN_NIGHTLY=0 00:04:20.943 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:20.943 + [[ -n '' ]] 00:04:20.943 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:20.943 + for M in /var/spdk/build-*-manifest.txt 00:04:20.943 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:20.943 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:20.943 + for M in /var/spdk/build-*-manifest.txt 00:04:20.943 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:20.943 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:21.201 ++ uname 00:04:21.201 + [[ Linux == \L\i\n\u\x ]] 00:04:21.201 + sudo dmesg -T 00:04:21.201 + sudo dmesg --clear 00:04:21.201 + dmesg_pid=5202 00:04:21.201 + sudo dmesg -Tw 00:04:21.201 + [[ Fedora Linux == FreeBSD ]] 00:04:21.201 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:21.201 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:21.201 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:21.201 + [[ -x /usr/src/fio-static/fio ]] 00:04:21.201 + export FIO_BIN=/usr/src/fio-static/fio 00:04:21.201 + FIO_BIN=/usr/src/fio-static/fio 00:04:21.201 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:21.201 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:21.201 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:21.201 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:21.201 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:21.201 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:21.201 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:21.201 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:21.201 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:21.201 Test configuration: 00:04:21.201 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:21.201 SPDK_TEST_NVME=1 00:04:21.201 SPDK_TEST_FTL=1 00:04:21.201 SPDK_TEST_ISAL=1 00:04:21.201 SPDK_RUN_ASAN=1 00:04:21.201 SPDK_RUN_UBSAN=1 00:04:21.201 SPDK_TEST_XNVME=1 00:04:21.201 SPDK_TEST_NVME_FDP=1 00:04:21.201 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:21.201 RUN_NIGHTLY=0 03:32:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.201 03:32:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:21.201 03:32:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.202 03:32:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.202 03:32:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.202 03:32:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.202 03:32:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.202 03:32:35 -- paths/export.sh@5 -- $ export PATH 00:04:21.202 03:32:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.202 03:32:35 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:21.202 03:32:35 -- common/autobuild_common.sh@447 -- $ date +%s 00:04:21.202 03:32:36 -- common/autobuild_common.sh@447 -- $ mktemp -dt 
spdk_1721964756.XXXXXX 00:04:21.202 03:32:36 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721964756.jZEgsz 00:04:21.202 03:32:36 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:04:21.202 03:32:36 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:04:21.202 03:32:36 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:21.202 03:32:36 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:21.202 03:32:36 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:21.202 03:32:36 -- common/autobuild_common.sh@463 -- $ get_config_params 00:04:21.202 03:32:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:04:21.202 03:32:36 -- common/autotest_common.sh@10 -- $ set +x 00:04:21.202 03:32:36 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:04:21.202 03:32:36 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:04:21.202 03:32:36 -- pm/common@17 -- $ local monitor 00:04:21.202 03:32:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.202 03:32:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.202 03:32:36 -- pm/common@25 -- $ sleep 1 00:04:21.202 03:32:36 -- pm/common@21 -- $ date +%s 00:04:21.202 03:32:36 -- pm/common@21 -- $ date +%s 00:04:21.202 03:32:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721964756 00:04:21.202 03:32:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721964756 00:04:21.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721964756_collect-vmstat.pm.log 00:04:21.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721964756_collect-cpu-load.pm.log 00:04:22.136 03:32:37 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:04:22.136 03:32:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:22.136 03:32:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:22.136 03:32:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:22.136 03:32:37 -- spdk/autobuild.sh@16 -- $ date -u 00:04:22.136 Fri Jul 26 03:32:37 AM UTC 2024 00:04:22.136 03:32:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:22.394 v24.09-pre-304-g764779691 00:04:22.394 03:32:37 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:22.394 03:32:37 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:22.394 03:32:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:22.394 03:32:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:22.394 03:32:37 -- common/autotest_common.sh@10 -- $ set +x 00:04:22.394 ************************************ 00:04:22.394 START TEST asan 00:04:22.394 ************************************ 00:04:22.394 using asan 00:04:22.394 03:32:37 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:04:22.394 00:04:22.394 
real 0m0.000s 00:04:22.394 user 0m0.000s 00:04:22.394 sys 0m0.000s 00:04:22.394 03:32:37 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:22.394 ************************************ 00:04:22.394 03:32:37 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:22.394 END TEST asan 00:04:22.394 ************************************ 00:04:22.394 03:32:37 -- common/autotest_common.sh@1142 -- $ return 0 00:04:22.394 03:32:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:22.394 03:32:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:22.394 03:32:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:22.394 03:32:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:22.394 03:32:37 -- common/autotest_common.sh@10 -- $ set +x 00:04:22.394 ************************************ 00:04:22.394 START TEST ubsan 00:04:22.394 ************************************ 00:04:22.394 using ubsan 00:04:22.394 03:32:37 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:04:22.394 00:04:22.394 real 0m0.000s 00:04:22.394 user 0m0.000s 00:04:22.394 sys 0m0.000s 00:04:22.394 03:32:37 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:22.394 ************************************ 00:04:22.394 03:32:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:22.394 END TEST ubsan 00:04:22.394 ************************************ 00:04:22.394 03:32:37 -- common/autotest_common.sh@1142 -- $ return 0 00:04:22.394 03:32:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:22.394 03:32:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:22.394 03:32:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:22.394 03:32:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:22.394 03:32:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:22.394 03:32:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:22.394 03:32:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:22.394 03:32:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:22.394 03:32:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:04:22.394 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:22.394 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:22.959 Using 'verbs' RDMA provider 00:04:36.097 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:50.981 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:51.239 Creating mk/config.mk...done. 00:04:51.239 Creating mk/cc.flags.mk...done. 00:04:51.239 Type 'make' to build. 
00:04:51.239 03:33:05 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:51.239 03:33:05 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:51.239 03:33:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:51.239 03:33:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:51.239 ************************************ 00:04:51.239 START TEST make 00:04:51.239 ************************************ 00:04:51.239 03:33:05 make -- common/autotest_common.sh@1123 -- $ make -j10 00:04:51.497 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:51.497 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:51.497 meson setup builddir \ 00:04:51.497 -Dwith-libaio=enabled \ 00:04:51.497 -Dwith-liburing=enabled \ 00:04:51.497 -Dwith-libvfn=disabled \ 00:04:51.497 -Dwith-spdk=false && \ 00:04:51.497 meson compile -C builddir && \ 00:04:51.497 cd -) 00:04:51.497 make[1]: Nothing to be done for 'all'. 00:04:56.801 The Meson build system 00:04:56.801 Version: 1.3.1 00:04:56.801 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:56.801 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:56.801 Build type: native build 00:04:56.801 Project name: xnvme 00:04:56.801 Project version: 0.7.3 00:04:56.801 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:56.801 C linker for the host machine: cc ld.bfd 2.39-16 00:04:56.801 Host machine cpu family: x86_64 00:04:56.801 Host machine cpu: x86_64 00:04:56.801 Message: host_machine.system: linux 00:04:56.801 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:56.801 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:56.801 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:56.801 Run-time dependency threads found: YES 00:04:56.801 Has header "setupapi.h" : NO 00:04:56.801 Has header "linux/blkzoned.h" : YES 00:04:56.801 Has header "linux/blkzoned.h" : YES (cached) 00:04:56.801 Has header "libaio.h" : YES 00:04:56.801 Library aio found: YES 00:04:56.801 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:56.801 Run-time dependency liburing found: YES 2.2 00:04:56.801 Dependency libvfn skipped: feature with-libvfn disabled 00:04:56.801 Run-time dependency appleframeworks found: NO (tried framework) 00:04:56.801 Run-time dependency appleframeworks found: NO (tried framework) 00:04:56.801 Configuring xnvme_config.h using configuration 00:04:56.801 Configuring xnvme.spec using configuration 00:04:56.801 Run-time dependency bash-completion found: YES 2.11 00:04:56.801 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:56.801 Program cp found: YES (/usr/bin/cp) 00:04:56.801 Has header "winsock2.h" : NO 00:04:56.801 Has header "dbghelp.h" : NO 00:04:56.801 Library rpcrt4 found: NO 00:04:56.801 Library rt found: YES 00:04:56.801 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:56.801 Found CMake: /usr/bin/cmake (3.27.7) 00:04:56.801 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:56.801 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:56.801 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:56.801 Build targets in project: 32 00:04:56.801 00:04:56.801 xnvme 0.7.3 00:04:56.801 00:04:56.802 User defined options 00:04:56.802 with-libaio : enabled 00:04:56.802 with-liburing: enabled 00:04:56.802 with-libvfn : disabled 00:04:56.802 with-spdk : false 00:04:56.802 00:04:56.802 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:57.060 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:57.318 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:57.318 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:57.318 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:57.318 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:57.318 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:57.587 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:57.587 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:57.587 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:57.587 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:57.587 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:57.587 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:57.587 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:57.587 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:57.587 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:57.858 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:57.858 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:57.858 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:57.858 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:57.858 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:57.858 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:57.858 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:57.858 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:57.858 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:57.858 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:57.858 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:58.117 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:58.117 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:58.117 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:58.117 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:58.117 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:58.117 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:58.117 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:58.117 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:58.117 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:58.117 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:58.117 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:58.117 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:58.117 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:58.117 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:58.117 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:58.117 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:58.117 [42/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:58.117 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:58.117 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:58.117 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:58.117 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:58.117 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:58.376 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:58.376 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:58.376 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:58.376 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:58.376 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:58.376 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:58.376 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:58.376 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:58.376 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:58.376 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:58.634 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:58.634 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:58.634 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:58.634 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:58.634 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:58.634 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:58.634 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:58.634 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:58.634 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:58.634 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:58.893 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:58.893 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:58.893 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:58.893 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:58.893 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:58.893 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:58.893 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:58.893 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:58.893 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:59.157 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:59.157 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:59.157 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:59.157 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:59.157 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:59.157 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:59.157 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:59.418 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:59.418 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:59.418 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:59.418 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:59.418 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:59.418 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:59.418 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:59.418 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:59.418 [92/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:59.418 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:59.418 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:59.418 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:59.677 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:59.677 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:59.677 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:59.677 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:59.677 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:59.677 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:59.677 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:59.677 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:59.677 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:59.677 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:59.677 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:59.677 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:59.677 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:59.677 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:59.677 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:59.677 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:59.677 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:59.677 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:59.677 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:59.677 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:59.935 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:59.935 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:59.935 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:59.935 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:59.935 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:59.935 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:59.935 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:59.935 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:59.935 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:59.935 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:05:00.252 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:05:00.252 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:05:00.252 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:05:00.252 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 
00:05:00.252 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:05:00.252 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:05:00.252 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:05:00.252 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:05:00.252 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:05:00.252 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:05:00.252 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:05:00.252 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:05:00.512 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:05:00.512 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:05:00.512 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:05:00.512 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:05:00.512 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:05:00.772 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:05:00.772 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:05:00.772 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:05:00.772 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:05:00.772 [147/203] Linking target lib/libxnvme.so 00:05:00.772 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:05:00.772 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:05:00.772 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:05:00.772 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:05:01.030 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:05:01.030 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:05:01.030 [154/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:05:01.030 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:05:01.030 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:05:01.030 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:05:01.030 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:05:01.030 [159/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:05:01.304 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:05:01.304 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:05:01.304 [162/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:05:01.304 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:05:01.304 [164/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:05:01.304 [165/203] Compiling C object tools/lblk.p/lblk.c.o 00:05:01.304 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:05:01.563 [167/203] Compiling C object tools/kvs.p/kvs.c.o 00:05:01.563 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:05:01.563 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:05:01.563 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:05:01.563 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:05:01.821 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:05:01.821 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:05:01.821 [174/203] Linking static target lib/libxnvme.a 00:05:02.080 [175/203] Linking target tests/xnvme_tests_buf 00:05:02.080 [176/203] Linking 
target tests/xnvme_tests_async_intf 00:05:02.080 [177/203] Linking target tests/xnvme_tests_enum 00:05:02.080 [178/203] Linking target tests/xnvme_tests_cli 00:05:02.080 [179/203] Linking target tests/xnvme_tests_xnvme_cli 00:05:02.080 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:05:02.080 [181/203] Linking target tests/xnvme_tests_lblk 00:05:02.080 [182/203] Linking target tests/xnvme_tests_znd_explicit_open 00:05:02.080 [183/203] Linking target tests/xnvme_tests_znd_state 00:05:02.080 [184/203] Linking target tests/xnvme_tests_scc 00:05:02.080 [185/203] Linking target tests/xnvme_tests_ioworker 00:05:02.080 [186/203] Linking target tests/xnvme_tests_znd_append 00:05:02.080 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:05:02.080 [188/203] Linking target tests/xnvme_tests_map 00:05:02.080 [189/203] Linking target tools/lblk 00:05:02.080 [190/203] Linking target tests/xnvme_tests_kvs 00:05:02.080 [191/203] Linking target tools/xdd 00:05:02.080 [192/203] Linking target examples/xnvme_enum 00:05:02.080 [193/203] Linking target examples/xnvme_dev 00:05:02.080 [194/203] Linking target tools/kvs 00:05:02.080 [195/203] Linking target examples/xnvme_single_async 00:05:02.080 [196/203] Linking target tools/xnvme_file 00:05:02.080 [197/203] Linking target tools/xnvme 00:05:02.080 [198/203] Linking target tools/zoned 00:05:02.338 [199/203] Linking target examples/zoned_io_async 00:05:02.338 [200/203] Linking target examples/zoned_io_sync 00:05:02.338 [201/203] Linking target examples/xnvme_hello 00:05:02.338 [202/203] Linking target examples/xnvme_io_async 00:05:02.338 [203/203] Linking target examples/xnvme_single_sync 00:05:02.338 INFO: autodetecting backend as ninja 00:05:02.338 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:05:02.338 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:17.266 The Meson build system 00:05:17.266 Version: 1.3.1 00:05:17.266 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:17.266 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:17.266 Build type: native build 00:05:17.266 Program cat found: YES (/usr/bin/cat) 00:05:17.266 Project name: DPDK 00:05:17.266 Project version: 24.03.0 00:05:17.266 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:17.267 C linker for the host machine: cc ld.bfd 2.39-16 00:05:17.267 Host machine cpu family: x86_64 00:05:17.267 Host machine cpu: x86_64 00:05:17.267 Message: ## Building in Developer Mode ## 00:05:17.267 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:17.267 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:17.267 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:17.267 Program python3 found: YES (/usr/bin/python3) 00:05:17.267 Program cat found: YES (/usr/bin/cat) 00:05:17.267 Compiler for C supports arguments -march=native: YES 00:05:17.267 Checking for size of "void *" : 8 00:05:17.267 Checking for size of "void *" : 8 (cached) 00:05:17.267 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:05:17.267 Library m found: YES 00:05:17.267 Library numa found: YES 00:05:17.267 Has header "numaif.h" : YES 00:05:17.267 Library fdt found: NO 00:05:17.267 Library execinfo found: NO 00:05:17.267 Has header "execinfo.h" : YES 00:05:17.267 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:17.267 Run-time dependency libarchive found: 
NO (tried pkgconfig) 00:05:17.267 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:17.267 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:17.267 Run-time dependency openssl found: YES 3.0.9 00:05:17.267 Run-time dependency libpcap found: YES 1.10.4 00:05:17.267 Has header "pcap.h" with dependency libpcap: YES 00:05:17.267 Compiler for C supports arguments -Wcast-qual: YES 00:05:17.267 Compiler for C supports arguments -Wdeprecated: YES 00:05:17.267 Compiler for C supports arguments -Wformat: YES 00:05:17.267 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:17.267 Compiler for C supports arguments -Wformat-security: NO 00:05:17.267 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:17.267 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:17.267 Compiler for C supports arguments -Wnested-externs: YES 00:05:17.267 Compiler for C supports arguments -Wold-style-definition: YES 00:05:17.267 Compiler for C supports arguments -Wpointer-arith: YES 00:05:17.267 Compiler for C supports arguments -Wsign-compare: YES 00:05:17.267 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:17.267 Compiler for C supports arguments -Wundef: YES 00:05:17.267 Compiler for C supports arguments -Wwrite-strings: YES 00:05:17.267 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:17.267 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:17.267 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:17.267 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:17.267 Program objdump found: YES (/usr/bin/objdump) 00:05:17.267 Compiler for C supports arguments -mavx512f: YES 00:05:17.267 Checking if "AVX512 checking" compiles: YES 00:05:17.267 Fetching value of define "__SSE4_2__" : 1 00:05:17.267 Fetching value of define "__AES__" : 1 00:05:17.267 Fetching value of define "__AVX__" : 1 00:05:17.267 Fetching value of define "__AVX2__" : 1 00:05:17.267 Fetching value of define "__AVX512BW__" : (undefined) 00:05:17.267 Fetching value of define "__AVX512CD__" : (undefined) 00:05:17.267 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:17.267 Fetching value of define "__AVX512F__" : (undefined) 00:05:17.267 Fetching value of define "__AVX512VL__" : (undefined) 00:05:17.267 Fetching value of define "__PCLMUL__" : 1 00:05:17.267 Fetching value of define "__RDRND__" : 1 00:05:17.267 Fetching value of define "__RDSEED__" : 1 00:05:17.267 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:17.267 Fetching value of define "__znver1__" : (undefined) 00:05:17.267 Fetching value of define "__znver2__" : (undefined) 00:05:17.267 Fetching value of define "__znver3__" : (undefined) 00:05:17.267 Fetching value of define "__znver4__" : (undefined) 00:05:17.267 Library asan found: YES 00:05:17.267 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:17.267 Message: lib/log: Defining dependency "log" 00:05:17.267 Message: lib/kvargs: Defining dependency "kvargs" 00:05:17.267 Message: lib/telemetry: Defining dependency "telemetry" 00:05:17.267 Library rt found: YES 00:05:17.267 Checking for function "getentropy" : NO 00:05:17.267 Message: lib/eal: Defining dependency "eal" 00:05:17.267 Message: lib/ring: Defining dependency "ring" 00:05:17.267 Message: lib/rcu: Defining dependency "rcu" 00:05:17.267 Message: lib/mempool: Defining dependency "mempool" 00:05:17.267 Message: lib/mbuf: Defining dependency "mbuf" 00:05:17.267 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:05:17.267 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:17.267 Compiler for C supports arguments -mpclmul: YES 00:05:17.267 Compiler for C supports arguments -maes: YES 00:05:17.267 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:17.267 Compiler for C supports arguments -mavx512bw: YES 00:05:17.267 Compiler for C supports arguments -mavx512dq: YES 00:05:17.267 Compiler for C supports arguments -mavx512vl: YES 00:05:17.267 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:17.267 Compiler for C supports arguments -mavx2: YES 00:05:17.267 Compiler for C supports arguments -mavx: YES 00:05:17.267 Message: lib/net: Defining dependency "net" 00:05:17.267 Message: lib/meter: Defining dependency "meter" 00:05:17.267 Message: lib/ethdev: Defining dependency "ethdev" 00:05:17.267 Message: lib/pci: Defining dependency "pci" 00:05:17.267 Message: lib/cmdline: Defining dependency "cmdline" 00:05:17.267 Message: lib/hash: Defining dependency "hash" 00:05:17.267 Message: lib/timer: Defining dependency "timer" 00:05:17.267 Message: lib/compressdev: Defining dependency "compressdev" 00:05:17.267 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:17.267 Message: lib/dmadev: Defining dependency "dmadev" 00:05:17.267 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:17.267 Message: lib/power: Defining dependency "power" 00:05:17.267 Message: lib/reorder: Defining dependency "reorder" 00:05:17.267 Message: lib/security: Defining dependency "security" 00:05:17.267 Has header "linux/userfaultfd.h" : YES 00:05:17.267 Has header "linux/vduse.h" : YES 00:05:17.267 Message: lib/vhost: Defining dependency "vhost" 00:05:17.267 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:17.267 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:17.267 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:17.267 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:17.267 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:17.267 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:17.267 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:17.267 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:17.267 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:17.267 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:17.267 Program doxygen found: YES (/usr/bin/doxygen) 00:05:17.267 Configuring doxy-api-html.conf using configuration 00:05:17.267 Configuring doxy-api-man.conf using configuration 00:05:17.267 Program mandb found: YES (/usr/bin/mandb) 00:05:17.267 Program sphinx-build found: NO 00:05:17.267 Configuring rte_build_config.h using configuration 00:05:17.267 Message: 00:05:17.267 ================= 00:05:17.267 Applications Enabled 00:05:17.267 ================= 00:05:17.267 00:05:17.267 apps: 00:05:17.267 00:05:17.267 00:05:17.267 Message: 00:05:17.267 ================= 00:05:17.267 Libraries Enabled 00:05:17.267 ================= 00:05:17.267 00:05:17.267 libs: 00:05:17.267 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:17.267 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:17.267 cryptodev, dmadev, power, reorder, security, vhost, 00:05:17.267 00:05:17.267 Message: 00:05:17.267 =============== 00:05:17.267 Drivers Enabled 00:05:17.267 =============== 00:05:17.267 00:05:17.267 
common: 00:05:17.267 00:05:17.267 bus: 00:05:17.267 pci, vdev, 00:05:17.267 mempool: 00:05:17.267 ring, 00:05:17.267 dma: 00:05:17.267 00:05:17.267 net: 00:05:17.267 00:05:17.267 crypto: 00:05:17.267 00:05:17.267 compress: 00:05:17.267 00:05:17.267 vdpa: 00:05:17.267 00:05:17.267 00:05:17.267 Message: 00:05:17.267 ================= 00:05:17.267 Content Skipped 00:05:17.267 ================= 00:05:17.267 00:05:17.267 apps: 00:05:17.267 dumpcap: explicitly disabled via build config 00:05:17.267 graph: explicitly disabled via build config 00:05:17.267 pdump: explicitly disabled via build config 00:05:17.267 proc-info: explicitly disabled via build config 00:05:17.267 test-acl: explicitly disabled via build config 00:05:17.267 test-bbdev: explicitly disabled via build config 00:05:17.267 test-cmdline: explicitly disabled via build config 00:05:17.267 test-compress-perf: explicitly disabled via build config 00:05:17.268 test-crypto-perf: explicitly disabled via build config 00:05:17.268 test-dma-perf: explicitly disabled via build config 00:05:17.268 test-eventdev: explicitly disabled via build config 00:05:17.268 test-fib: explicitly disabled via build config 00:05:17.268 test-flow-perf: explicitly disabled via build config 00:05:17.268 test-gpudev: explicitly disabled via build config 00:05:17.268 test-mldev: explicitly disabled via build config 00:05:17.268 test-pipeline: explicitly disabled via build config 00:05:17.268 test-pmd: explicitly disabled via build config 00:05:17.268 test-regex: explicitly disabled via build config 00:05:17.268 test-sad: explicitly disabled via build config 00:05:17.268 test-security-perf: explicitly disabled via build config 00:05:17.268 00:05:17.268 libs: 00:05:17.268 argparse: explicitly disabled via build config 00:05:17.268 metrics: explicitly disabled via build config 00:05:17.268 acl: explicitly disabled via build config 00:05:17.268 bbdev: explicitly disabled via build config 00:05:17.268 bitratestats: explicitly disabled via build config 00:05:17.268 bpf: explicitly disabled via build config 00:05:17.268 cfgfile: explicitly disabled via build config 00:05:17.268 distributor: explicitly disabled via build config 00:05:17.268 efd: explicitly disabled via build config 00:05:17.268 eventdev: explicitly disabled via build config 00:05:17.268 dispatcher: explicitly disabled via build config 00:05:17.268 gpudev: explicitly disabled via build config 00:05:17.268 gro: explicitly disabled via build config 00:05:17.268 gso: explicitly disabled via build config 00:05:17.268 ip_frag: explicitly disabled via build config 00:05:17.268 jobstats: explicitly disabled via build config 00:05:17.268 latencystats: explicitly disabled via build config 00:05:17.268 lpm: explicitly disabled via build config 00:05:17.268 member: explicitly disabled via build config 00:05:17.268 pcapng: explicitly disabled via build config 00:05:17.268 rawdev: explicitly disabled via build config 00:05:17.268 regexdev: explicitly disabled via build config 00:05:17.268 mldev: explicitly disabled via build config 00:05:17.268 rib: explicitly disabled via build config 00:05:17.268 sched: explicitly disabled via build config 00:05:17.268 stack: explicitly disabled via build config 00:05:17.268 ipsec: explicitly disabled via build config 00:05:17.268 pdcp: explicitly disabled via build config 00:05:17.268 fib: explicitly disabled via build config 00:05:17.268 port: explicitly disabled via build config 00:05:17.268 pdump: explicitly disabled via build config 00:05:17.268 table: explicitly disabled via 
build config 00:05:17.268 pipeline: explicitly disabled via build config 00:05:17.268 graph: explicitly disabled via build config 00:05:17.268 node: explicitly disabled via build config 00:05:17.268 00:05:17.268 drivers: 00:05:17.268 common/cpt: not in enabled drivers build config 00:05:17.268 common/dpaax: not in enabled drivers build config 00:05:17.268 common/iavf: not in enabled drivers build config 00:05:17.268 common/idpf: not in enabled drivers build config 00:05:17.268 common/ionic: not in enabled drivers build config 00:05:17.268 common/mvep: not in enabled drivers build config 00:05:17.268 common/octeontx: not in enabled drivers build config 00:05:17.268 bus/auxiliary: not in enabled drivers build config 00:05:17.268 bus/cdx: not in enabled drivers build config 00:05:17.268 bus/dpaa: not in enabled drivers build config 00:05:17.268 bus/fslmc: not in enabled drivers build config 00:05:17.268 bus/ifpga: not in enabled drivers build config 00:05:17.268 bus/platform: not in enabled drivers build config 00:05:17.268 bus/uacce: not in enabled drivers build config 00:05:17.268 bus/vmbus: not in enabled drivers build config 00:05:17.268 common/cnxk: not in enabled drivers build config 00:05:17.268 common/mlx5: not in enabled drivers build config 00:05:17.268 common/nfp: not in enabled drivers build config 00:05:17.268 common/nitrox: not in enabled drivers build config 00:05:17.268 common/qat: not in enabled drivers build config 00:05:17.268 common/sfc_efx: not in enabled drivers build config 00:05:17.268 mempool/bucket: not in enabled drivers build config 00:05:17.268 mempool/cnxk: not in enabled drivers build config 00:05:17.268 mempool/dpaa: not in enabled drivers build config 00:05:17.268 mempool/dpaa2: not in enabled drivers build config 00:05:17.268 mempool/octeontx: not in enabled drivers build config 00:05:17.268 mempool/stack: not in enabled drivers build config 00:05:17.268 dma/cnxk: not in enabled drivers build config 00:05:17.268 dma/dpaa: not in enabled drivers build config 00:05:17.268 dma/dpaa2: not in enabled drivers build config 00:05:17.268 dma/hisilicon: not in enabled drivers build config 00:05:17.268 dma/idxd: not in enabled drivers build config 00:05:17.268 dma/ioat: not in enabled drivers build config 00:05:17.268 dma/skeleton: not in enabled drivers build config 00:05:17.268 net/af_packet: not in enabled drivers build config 00:05:17.268 net/af_xdp: not in enabled drivers build config 00:05:17.268 net/ark: not in enabled drivers build config 00:05:17.268 net/atlantic: not in enabled drivers build config 00:05:17.268 net/avp: not in enabled drivers build config 00:05:17.268 net/axgbe: not in enabled drivers build config 00:05:17.268 net/bnx2x: not in enabled drivers build config 00:05:17.268 net/bnxt: not in enabled drivers build config 00:05:17.268 net/bonding: not in enabled drivers build config 00:05:17.268 net/cnxk: not in enabled drivers build config 00:05:17.268 net/cpfl: not in enabled drivers build config 00:05:17.268 net/cxgbe: not in enabled drivers build config 00:05:17.268 net/dpaa: not in enabled drivers build config 00:05:17.268 net/dpaa2: not in enabled drivers build config 00:05:17.268 net/e1000: not in enabled drivers build config 00:05:17.268 net/ena: not in enabled drivers build config 00:05:17.268 net/enetc: not in enabled drivers build config 00:05:17.268 net/enetfec: not in enabled drivers build config 00:05:17.268 net/enic: not in enabled drivers build config 00:05:17.268 net/failsafe: not in enabled drivers build config 00:05:17.268 
net/fm10k: not in enabled drivers build config 00:05:17.268 net/gve: not in enabled drivers build config 00:05:17.268 net/hinic: not in enabled drivers build config 00:05:17.268 net/hns3: not in enabled drivers build config 00:05:17.268 net/i40e: not in enabled drivers build config 00:05:17.268 net/iavf: not in enabled drivers build config 00:05:17.268 net/ice: not in enabled drivers build config 00:05:17.268 net/idpf: not in enabled drivers build config 00:05:17.268 net/igc: not in enabled drivers build config 00:05:17.268 net/ionic: not in enabled drivers build config 00:05:17.268 net/ipn3ke: not in enabled drivers build config 00:05:17.268 net/ixgbe: not in enabled drivers build config 00:05:17.268 net/mana: not in enabled drivers build config 00:05:17.268 net/memif: not in enabled drivers build config 00:05:17.268 net/mlx4: not in enabled drivers build config 00:05:17.268 net/mlx5: not in enabled drivers build config 00:05:17.268 net/mvneta: not in enabled drivers build config 00:05:17.268 net/mvpp2: not in enabled drivers build config 00:05:17.268 net/netvsc: not in enabled drivers build config 00:05:17.268 net/nfb: not in enabled drivers build config 00:05:17.268 net/nfp: not in enabled drivers build config 00:05:17.268 net/ngbe: not in enabled drivers build config 00:05:17.268 net/null: not in enabled drivers build config 00:05:17.268 net/octeontx: not in enabled drivers build config 00:05:17.268 net/octeon_ep: not in enabled drivers build config 00:05:17.268 net/pcap: not in enabled drivers build config 00:05:17.268 net/pfe: not in enabled drivers build config 00:05:17.268 net/qede: not in enabled drivers build config 00:05:17.268 net/ring: not in enabled drivers build config 00:05:17.268 net/sfc: not in enabled drivers build config 00:05:17.268 net/softnic: not in enabled drivers build config 00:05:17.268 net/tap: not in enabled drivers build config 00:05:17.268 net/thunderx: not in enabled drivers build config 00:05:17.268 net/txgbe: not in enabled drivers build config 00:05:17.268 net/vdev_netvsc: not in enabled drivers build config 00:05:17.268 net/vhost: not in enabled drivers build config 00:05:17.268 net/virtio: not in enabled drivers build config 00:05:17.268 net/vmxnet3: not in enabled drivers build config 00:05:17.268 raw/*: missing internal dependency, "rawdev" 00:05:17.268 crypto/armv8: not in enabled drivers build config 00:05:17.268 crypto/bcmfs: not in enabled drivers build config 00:05:17.268 crypto/caam_jr: not in enabled drivers build config 00:05:17.268 crypto/ccp: not in enabled drivers build config 00:05:17.268 crypto/cnxk: not in enabled drivers build config 00:05:17.268 crypto/dpaa_sec: not in enabled drivers build config 00:05:17.268 crypto/dpaa2_sec: not in enabled drivers build config 00:05:17.268 crypto/ipsec_mb: not in enabled drivers build config 00:05:17.268 crypto/mlx5: not in enabled drivers build config 00:05:17.268 crypto/mvsam: not in enabled drivers build config 00:05:17.268 crypto/nitrox: not in enabled drivers build config 00:05:17.268 crypto/null: not in enabled drivers build config 00:05:17.268 crypto/octeontx: not in enabled drivers build config 00:05:17.269 crypto/openssl: not in enabled drivers build config 00:05:17.269 crypto/scheduler: not in enabled drivers build config 00:05:17.269 crypto/uadk: not in enabled drivers build config 00:05:17.269 crypto/virtio: not in enabled drivers build config 00:05:17.269 compress/isal: not in enabled drivers build config 00:05:17.269 compress/mlx5: not in enabled drivers build config 00:05:17.269 
compress/nitrox: not in enabled drivers build config 00:05:17.269 compress/octeontx: not in enabled drivers build config 00:05:17.269 compress/zlib: not in enabled drivers build config 00:05:17.269 regex/*: missing internal dependency, "regexdev" 00:05:17.269 ml/*: missing internal dependency, "mldev" 00:05:17.269 vdpa/ifc: not in enabled drivers build config 00:05:17.269 vdpa/mlx5: not in enabled drivers build config 00:05:17.269 vdpa/nfp: not in enabled drivers build config 00:05:17.269 vdpa/sfc: not in enabled drivers build config 00:05:17.269 event/*: missing internal dependency, "eventdev" 00:05:17.269 baseband/*: missing internal dependency, "bbdev" 00:05:17.269 gpu/*: missing internal dependency, "gpudev" 00:05:17.269 00:05:17.269 00:05:17.835 Build targets in project: 85 00:05:17.835 00:05:17.835 DPDK 24.03.0 00:05:17.835 00:05:17.835 User defined options 00:05:17.835 buildtype : debug 00:05:17.835 default_library : shared 00:05:17.835 libdir : lib 00:05:17.835 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:17.835 b_sanitize : address 00:05:17.835 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:17.835 c_link_args : 00:05:17.835 cpu_instruction_set: native 00:05:17.835 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:17.836 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:17.836 enable_docs : false 00:05:17.836 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:17.836 enable_kmods : false 00:05:17.836 max_lcores : 128 00:05:17.836 tests : false 00:05:17.836 00:05:17.836 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:19.247 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:19.247 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:19.508 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:19.508 [3/268] Linking static target lib/librte_kvargs.a 00:05:19.508 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:19.508 [5/268] Linking static target lib/librte_log.a 00:05:19.508 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:20.086 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.344 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:20.602 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:20.861 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:20.861 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:20.861 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:20.861 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:21.155 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:21.155 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:21.155 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:05:21.155 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:21.155 [18/268] Linking static target lib/librte_telemetry.a 00:05:21.155 [19/268] Linking target lib/librte_log.so.24.1 00:05:21.447 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:21.707 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:21.707 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:21.966 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:22.226 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:22.485 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:22.485 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:22.485 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:22.485 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:22.756 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.756 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:23.028 [31/268] Linking target lib/librte_telemetry.so.24.1 00:05:23.028 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:23.028 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:23.028 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:23.286 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:23.286 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:23.544 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:24.190 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:24.190 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:24.190 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:24.190 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:24.190 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:24.190 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:24.449 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:24.708 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:24.708 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:24.966 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:25.224 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:25.224 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:25.484 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:25.745 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:25.745 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:26.036 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:26.036 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:26.300 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:26.300 
[56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:26.558 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:27.130 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:27.130 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:27.130 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:27.130 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:27.388 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:27.388 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:27.388 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:27.646 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:27.646 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:27.904 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:28.470 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:28.728 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:28.728 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:29.003 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:29.003 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:29.264 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:29.264 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:29.264 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:29.264 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:29.264 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:29.522 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:29.522 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:30.088 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:30.088 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:30.655 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:30.655 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:30.913 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:31.171 [85/268] Linking static target lib/librte_eal.a 00:05:31.171 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:31.171 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:31.171 [88/268] Linking static target lib/librte_ring.a 00:05:31.429 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:31.689 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:31.955 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.238 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:32.238 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:32.238 [94/268] Linking static target lib/librte_rcu.a 00:05:32.238 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:32.238 [96/268] Linking static target lib/librte_mempool.a 00:05:32.502 [97/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:32.760 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:33.018 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.018 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:33.278 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:33.536 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:33.795 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:33.795 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:34.054 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:34.054 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:34.054 [107/268] Linking static target lib/librte_net.a 00:05:34.313 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.313 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:34.313 [110/268] Linking static target lib/librte_meter.a 00:05:34.571 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:34.571 [112/268] Linking static target lib/librte_mbuf.a 00:05:34.830 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.830 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.830 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:34.830 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:35.088 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:35.347 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:36.282 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.282 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:36.282 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:36.541 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:36.800 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:37.060 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:37.060 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:37.060 [126/268] Linking static target lib/librte_pci.a 00:05:37.318 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:37.577 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:37.577 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:37.836 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:37.836 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.836 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:37.836 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:38.094 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:38.094 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:38.094 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:38.094 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
00:05:38.094 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:38.094 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:38.402 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:38.402 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:38.402 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:38.402 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:38.402 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:38.975 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:38.975 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:38.975 [147/268] Linking static target lib/librte_cmdline.a 00:05:39.234 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:39.234 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:39.234 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:39.803 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:39.803 [152/268] Linking static target lib/librte_timer.a 00:05:39.803 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:40.371 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:40.371 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:40.371 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:40.371 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.630 [158/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:40.889 [159/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.889 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:40.889 [161/268] Linking static target lib/librte_ethdev.a 00:05:41.147 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:41.147 [163/268] Linking static target lib/librte_compressdev.a 00:05:41.147 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:41.409 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:41.667 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:41.667 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:41.667 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:41.667 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:41.667 [170/268] Linking static target lib/librte_dmadev.a 00:05:41.667 [171/268] Linking static target lib/librte_hash.a 00:05:41.926 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:41.926 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:42.185 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:42.185 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.444 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.702 [177/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:42.702 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:42.702 [179/268] Linking static target lib/librte_cryptodev.a 00:05:42.702 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:42.960 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.960 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:42.960 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:42.960 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:43.527 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:43.527 [186/268] Linking static target lib/librte_power.a 00:05:43.785 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:43.785 [188/268] Linking static target lib/librte_reorder.a 00:05:43.785 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:43.785 [190/268] Linking static target lib/librte_security.a 00:05:43.785 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:44.043 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:44.043 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:44.043 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.301 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:44.301 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.559 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.559 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:44.822 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.822 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:45.090 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:45.090 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:45.090 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:45.090 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:45.348 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:45.348 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:45.349 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:45.607 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:45.607 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:45.607 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:45.607 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:45.866 [212/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.866 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:45.866 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:45.866 [215/268] Linking target lib/librte_eal.so.24.1 00:05:45.866 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:45.866 
[217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:45.866 [218/268] Linking static target drivers/librte_bus_vdev.a 00:05:45.866 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:45.866 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:45.866 [221/268] Linking static target drivers/librte_bus_pci.a 00:05:46.125 [222/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:46.125 [223/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:46.125 [224/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:46.125 [225/268] Linking target lib/librte_meter.so.24.1 00:05:46.125 [226/268] Linking target lib/librte_ring.so.24.1 00:05:46.125 [227/268] Linking target lib/librte_timer.so.24.1 00:05:46.125 [228/268] Linking target lib/librte_pci.so.24.1 00:05:46.125 [229/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:46.125 [230/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:46.383 [231/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:46.383 [232/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:46.383 [233/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:46.383 [234/268] Linking target lib/librte_dmadev.so.24.1 00:05:46.383 [235/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.383 [236/268] Linking target lib/librte_rcu.so.24.1 00:05:46.383 [237/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.383 [238/268] Linking target lib/librte_mempool.so.24.1 00:05:46.383 [239/268] Linking static target drivers/librte_mempool_ring.a 00:05:46.383 [240/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.383 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:46.383 [242/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:46.383 [243/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:46.383 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:46.383 [245/268] Linking target lib/librte_mbuf.so.24.1 00:05:46.641 [246/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:46.641 [247/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.641 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:46.641 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:46.641 [250/268] Linking target lib/librte_net.so.24.1 00:05:46.641 [251/268] Linking target lib/librte_compressdev.so.24.1 00:05:46.641 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:05:46.641 [253/268] Linking target lib/librte_reorder.so.24.1 00:05:46.900 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:46.900 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:46.900 [256/268] Linking target lib/librte_cmdline.so.24.1 00:05:46.900 [257/268] Linking target lib/librte_hash.so.24.1 00:05:46.900 [258/268] Linking target 
lib/librte_security.so.24.1 00:05:47.158 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:47.726 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:48.667 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.926 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:48.926 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:49.184 [264/268] Linking target lib/librte_power.so.24.1 00:05:52.508 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:52.508 [266/268] Linking static target lib/librte_vhost.a 00:05:53.882 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.882 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:53.882 INFO: autodetecting backend as ninja 00:05:53.882 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:54.815 CC lib/ut/ut.o 00:05:54.815 CC lib/ut_mock/mock.o 00:05:54.815 CC lib/log/log.o 00:05:54.815 CC lib/log/log_deprecated.o 00:05:54.815 CC lib/log/log_flags.o 00:05:55.074 LIB libspdk_ut_mock.a 00:05:55.074 SO libspdk_ut_mock.so.6.0 00:05:55.074 LIB libspdk_ut.a 00:05:55.074 LIB libspdk_log.a 00:05:55.074 SYMLINK libspdk_ut_mock.so 00:05:55.074 SO libspdk_ut.so.2.0 00:05:55.074 SO libspdk_log.so.7.0 00:05:55.074 SYMLINK libspdk_ut.so 00:05:55.332 SYMLINK libspdk_log.so 00:05:55.332 CC lib/ioat/ioat.o 00:05:55.332 CC lib/util/base64.o 00:05:55.332 CC lib/util/bit_array.o 00:05:55.332 CC lib/util/crc16.o 00:05:55.332 CC lib/dma/dma.o 00:05:55.332 CC lib/util/cpuset.o 00:05:55.332 CC lib/util/crc32.o 00:05:55.332 CC lib/util/crc32c.o 00:05:55.332 CXX lib/trace_parser/trace.o 00:05:55.590 CC lib/vfio_user/host/vfio_user_pci.o 00:05:55.590 CC lib/vfio_user/host/vfio_user.o 00:05:55.590 CC lib/util/crc32_ieee.o 00:05:55.590 LIB libspdk_dma.a 00:05:55.590 CC lib/util/crc64.o 00:05:55.590 CC lib/util/dif.o 00:05:55.590 SO libspdk_dma.so.4.0 00:05:55.848 LIB libspdk_ioat.a 00:05:55.848 CC lib/util/fd.o 00:05:55.848 CC lib/util/fd_group.o 00:05:55.848 SYMLINK libspdk_dma.so 00:05:55.848 CC lib/util/file.o 00:05:55.848 CC lib/util/hexlify.o 00:05:55.848 SO libspdk_ioat.so.7.0 00:05:55.848 CC lib/util/iov.o 00:05:55.848 CC lib/util/math.o 00:05:55.848 SYMLINK libspdk_ioat.so 00:05:55.848 CC lib/util/net.o 00:05:56.106 CC lib/util/pipe.o 00:05:56.106 CC lib/util/strerror_tls.o 00:05:56.106 CC lib/util/string.o 00:05:56.106 CC lib/util/uuid.o 00:05:56.106 CC lib/util/xor.o 00:05:56.106 CC lib/util/zipf.o 00:05:56.106 LIB libspdk_vfio_user.a 00:05:56.106 SO libspdk_vfio_user.so.5.0 00:05:56.364 SYMLINK libspdk_vfio_user.so 00:05:56.621 LIB libspdk_util.a 00:05:56.621 SO libspdk_util.so.10.0 00:05:56.880 SYMLINK libspdk_util.so 00:05:57.138 LIB libspdk_trace_parser.a 00:05:57.138 CC lib/rdma_utils/rdma_utils.o 00:05:57.138 CC lib/vmd/led.o 00:05:57.138 CC lib/vmd/vmd.o 00:05:57.138 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:57.138 CC lib/rdma_provider/common.o 00:05:57.138 CC lib/json/json_parse.o 00:05:57.138 CC lib/conf/conf.o 00:05:57.138 CC lib/idxd/idxd.o 00:05:57.138 CC lib/env_dpdk/env.o 00:05:57.138 SO libspdk_trace_parser.so.5.0 00:05:57.488 SYMLINK libspdk_trace_parser.so 00:05:57.488 CC lib/idxd/idxd_user.o 00:05:57.488 CC lib/idxd/idxd_kernel.o 00:05:57.488 CC lib/json/json_util.o 00:05:57.488 CC lib/json/json_write.o 00:05:57.488 LIB 
libspdk_rdma_provider.a 00:05:57.488 LIB libspdk_conf.a 00:05:57.488 SO libspdk_conf.so.6.0 00:05:57.488 SO libspdk_rdma_provider.so.6.0 00:05:57.756 SYMLINK libspdk_conf.so 00:05:57.756 CC lib/env_dpdk/memory.o 00:05:57.756 CC lib/env_dpdk/pci.o 00:05:57.756 SYMLINK libspdk_rdma_provider.so 00:05:57.756 LIB libspdk_rdma_utils.a 00:05:57.756 CC lib/env_dpdk/init.o 00:05:57.756 SO libspdk_rdma_utils.so.1.0 00:05:57.756 CC lib/env_dpdk/threads.o 00:05:57.756 CC lib/env_dpdk/pci_ioat.o 00:05:57.756 SYMLINK libspdk_rdma_utils.so 00:05:57.756 CC lib/env_dpdk/pci_virtio.o 00:05:58.013 CC lib/env_dpdk/pci_vmd.o 00:05:58.013 LIB libspdk_json.a 00:05:58.013 CC lib/env_dpdk/pci_idxd.o 00:05:58.013 CC lib/env_dpdk/pci_event.o 00:05:58.013 SO libspdk_json.so.6.0 00:05:58.013 LIB libspdk_idxd.a 00:05:58.013 SO libspdk_idxd.so.12.0 00:05:58.013 SYMLINK libspdk_json.so 00:05:58.013 CC lib/env_dpdk/sigbus_handler.o 00:05:58.013 CC lib/env_dpdk/pci_dpdk.o 00:05:58.272 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:58.272 SYMLINK libspdk_idxd.so 00:05:58.272 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:58.529 CC lib/jsonrpc/jsonrpc_server.o 00:05:58.529 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:58.529 CC lib/jsonrpc/jsonrpc_client.o 00:05:58.529 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:58.529 LIB libspdk_vmd.a 00:05:58.529 SO libspdk_vmd.so.6.0 00:05:58.786 SYMLINK libspdk_vmd.so 00:05:58.786 LIB libspdk_jsonrpc.a 00:05:58.786 SO libspdk_jsonrpc.so.6.0 00:05:59.043 SYMLINK libspdk_jsonrpc.so 00:05:59.301 CC lib/rpc/rpc.o 00:05:59.301 LIB libspdk_env_dpdk.a 00:05:59.559 SO libspdk_env_dpdk.so.15.0 00:05:59.559 LIB libspdk_rpc.a 00:05:59.559 SO libspdk_rpc.so.6.0 00:05:59.559 SYMLINK libspdk_rpc.so 00:05:59.818 SYMLINK libspdk_env_dpdk.so 00:05:59.818 CC lib/notify/notify_rpc.o 00:05:59.818 CC lib/notify/notify.o 00:05:59.818 CC lib/trace/trace.o 00:05:59.818 CC lib/keyring/keyring.o 00:05:59.818 CC lib/trace/trace_flags.o 00:05:59.818 CC lib/keyring/keyring_rpc.o 00:05:59.818 CC lib/trace/trace_rpc.o 00:06:00.076 LIB libspdk_notify.a 00:06:00.076 LIB libspdk_keyring.a 00:06:00.076 SO libspdk_notify.so.6.0 00:06:00.076 LIB libspdk_trace.a 00:06:00.076 SO libspdk_keyring.so.1.0 00:06:00.334 SO libspdk_trace.so.10.0 00:06:00.334 SYMLINK libspdk_notify.so 00:06:00.334 SYMLINK libspdk_keyring.so 00:06:00.334 SYMLINK libspdk_trace.so 00:06:00.592 CC lib/sock/sock.o 00:06:00.592 CC lib/sock/sock_rpc.o 00:06:00.592 CC lib/thread/iobuf.o 00:06:00.592 CC lib/thread/thread.o 00:06:01.157 LIB libspdk_sock.a 00:06:01.157 SO libspdk_sock.so.10.0 00:06:01.157 SYMLINK libspdk_sock.so 00:06:01.441 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:01.441 CC lib/nvme/nvme_ctrlr.o 00:06:01.441 CC lib/nvme/nvme_fabric.o 00:06:01.441 CC lib/nvme/nvme_ns_cmd.o 00:06:01.441 CC lib/nvme/nvme_ns.o 00:06:01.441 CC lib/nvme/nvme_pcie_common.o 00:06:01.441 CC lib/nvme/nvme_pcie.o 00:06:01.441 CC lib/nvme/nvme_qpair.o 00:06:01.441 CC lib/nvme/nvme.o 00:06:02.821 CC lib/nvme/nvme_quirks.o 00:06:02.821 CC lib/nvme/nvme_transport.o 00:06:02.821 CC lib/nvme/nvme_discovery.o 00:06:03.150 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:03.150 LIB libspdk_thread.a 00:06:03.150 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:03.150 CC lib/nvme/nvme_tcp.o 00:06:03.423 SO libspdk_thread.so.10.1 00:06:03.423 CC lib/nvme/nvme_opal.o 00:06:03.423 SYMLINK libspdk_thread.so 00:06:03.423 CC lib/nvme/nvme_io_msg.o 00:06:03.423 CC lib/nvme/nvme_poll_group.o 00:06:03.988 CC lib/accel/accel.o 00:06:03.988 CC lib/blob/blobstore.o 00:06:03.988 CC lib/blob/request.o 00:06:04.246 CC lib/nvme/nvme_zns.o 
00:06:04.246 CC lib/nvme/nvme_stubs.o 00:06:04.246 CC lib/accel/accel_rpc.o 00:06:04.246 CC lib/nvme/nvme_auth.o 00:06:04.505 CC lib/accel/accel_sw.o 00:06:04.763 CC lib/blob/zeroes.o 00:06:04.763 CC lib/blob/blob_bs_dev.o 00:06:04.764 CC lib/nvme/nvme_cuse.o 00:06:05.021 CC lib/nvme/nvme_rdma.o 00:06:05.279 CC lib/init/json_config.o 00:06:05.279 CC lib/init/subsystem.o 00:06:05.279 CC lib/virtio/virtio.o 00:06:05.537 CC lib/init/subsystem_rpc.o 00:06:05.537 CC lib/init/rpc.o 00:06:05.537 CC lib/virtio/virtio_vhost_user.o 00:06:05.795 LIB libspdk_init.a 00:06:05.795 SO libspdk_init.so.5.0 00:06:05.795 CC lib/virtio/virtio_vfio_user.o 00:06:05.795 LIB libspdk_accel.a 00:06:05.795 SYMLINK libspdk_init.so 00:06:06.053 SO libspdk_accel.so.16.0 00:06:06.053 CC lib/virtio/virtio_pci.o 00:06:06.053 SYMLINK libspdk_accel.so 00:06:06.053 CC lib/event/app.o 00:06:06.053 CC lib/event/reactor.o 00:06:06.053 CC lib/event/log_rpc.o 00:06:06.311 CC lib/event/app_rpc.o 00:06:06.311 CC lib/event/scheduler_static.o 00:06:06.311 CC lib/bdev/bdev.o 00:06:06.311 CC lib/bdev/bdev_rpc.o 00:06:06.570 CC lib/bdev/bdev_zone.o 00:06:06.570 LIB libspdk_virtio.a 00:06:06.570 SO libspdk_virtio.so.7.0 00:06:06.829 CC lib/bdev/part.o 00:06:06.829 CC lib/bdev/scsi_nvme.o 00:06:06.829 SYMLINK libspdk_virtio.so 00:06:07.089 LIB libspdk_event.a 00:06:07.348 SO libspdk_event.so.14.0 00:06:07.348 SYMLINK libspdk_event.so 00:06:07.915 LIB libspdk_nvme.a 00:06:08.172 SO libspdk_nvme.so.13.1 00:06:08.766 SYMLINK libspdk_nvme.so 00:06:10.140 LIB libspdk_blob.a 00:06:10.398 SO libspdk_blob.so.11.0 00:06:10.398 SYMLINK libspdk_blob.so 00:06:10.657 CC lib/blobfs/blobfs.o 00:06:10.657 CC lib/lvol/lvol.o 00:06:10.657 CC lib/blobfs/tree.o 00:06:11.225 LIB libspdk_bdev.a 00:06:11.225 SO libspdk_bdev.so.16.0 00:06:11.507 SYMLINK libspdk_bdev.so 00:06:11.765 CC lib/nbd/nbd.o 00:06:11.765 CC lib/nbd/nbd_rpc.o 00:06:11.765 CC lib/nvmf/ctrlr_discovery.o 00:06:11.765 CC lib/nvmf/ctrlr.o 00:06:11.765 CC lib/nvmf/ctrlr_bdev.o 00:06:11.765 CC lib/scsi/dev.o 00:06:11.765 CC lib/ftl/ftl_core.o 00:06:11.765 CC lib/ublk/ublk.o 00:06:11.765 LIB libspdk_blobfs.a 00:06:12.024 SO libspdk_blobfs.so.10.0 00:06:12.024 CC lib/scsi/lun.o 00:06:12.024 SYMLINK libspdk_blobfs.so 00:06:12.024 CC lib/scsi/port.o 00:06:12.283 LIB libspdk_lvol.a 00:06:12.283 CC lib/ublk/ublk_rpc.o 00:06:12.283 SO libspdk_lvol.so.10.0 00:06:12.283 CC lib/ftl/ftl_init.o 00:06:12.283 SYMLINK libspdk_lvol.so 00:06:12.283 CC lib/ftl/ftl_layout.o 00:06:12.541 CC lib/ftl/ftl_debug.o 00:06:12.541 CC lib/scsi/scsi.o 00:06:12.541 CC lib/ftl/ftl_io.o 00:06:12.541 LIB libspdk_nbd.a 00:06:12.541 SO libspdk_nbd.so.7.0 00:06:12.799 CC lib/ftl/ftl_sb.o 00:06:12.799 CC lib/scsi/scsi_bdev.o 00:06:12.799 SYMLINK libspdk_nbd.so 00:06:12.799 CC lib/scsi/scsi_pr.o 00:06:12.799 CC lib/scsi/scsi_rpc.o 00:06:12.799 LIB libspdk_ublk.a 00:06:12.799 SO libspdk_ublk.so.3.0 00:06:12.799 CC lib/scsi/task.o 00:06:13.057 SYMLINK libspdk_ublk.so 00:06:13.058 CC lib/ftl/ftl_l2p.o 00:06:13.058 CC lib/ftl/ftl_l2p_flat.o 00:06:13.058 CC lib/ftl/ftl_nv_cache.o 00:06:13.058 CC lib/ftl/ftl_band.o 00:06:13.058 CC lib/nvmf/subsystem.o 00:06:13.317 CC lib/ftl/ftl_band_ops.o 00:06:13.317 CC lib/ftl/ftl_writer.o 00:06:13.317 CC lib/ftl/ftl_rq.o 00:06:13.317 CC lib/nvmf/nvmf.o 00:06:13.317 CC lib/nvmf/nvmf_rpc.o 00:06:13.882 CC lib/nvmf/transport.o 00:06:13.882 CC lib/ftl/ftl_reloc.o 00:06:13.882 LIB libspdk_scsi.a 00:06:13.882 SO libspdk_scsi.so.9.0 00:06:13.882 CC lib/nvmf/tcp.o 00:06:14.140 CC lib/ftl/ftl_l2p_cache.o 
00:06:14.140 SYMLINK libspdk_scsi.so 00:06:14.140 CC lib/nvmf/stubs.o 00:06:14.140 CC lib/nvmf/mdns_server.o 00:06:14.707 CC lib/ftl/ftl_p2l.o 00:06:14.707 CC lib/ftl/mngt/ftl_mngt.o 00:06:14.965 CC lib/nvmf/rdma.o 00:06:15.224 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:15.224 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:15.224 CC lib/vhost/vhost.o 00:06:15.224 CC lib/iscsi/conn.o 00:06:15.224 CC lib/vhost/vhost_rpc.o 00:06:15.491 CC lib/nvmf/auth.o 00:06:15.491 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:15.491 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:15.749 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:15.749 CC lib/iscsi/init_grp.o 00:06:16.007 CC lib/iscsi/iscsi.o 00:06:16.266 CC lib/iscsi/md5.o 00:06:16.266 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:16.266 CC lib/iscsi/param.o 00:06:16.266 CC lib/vhost/vhost_scsi.o 00:06:16.524 CC lib/iscsi/portal_grp.o 00:06:16.524 CC lib/iscsi/tgt_node.o 00:06:16.524 CC lib/vhost/vhost_blk.o 00:06:16.524 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:16.783 CC lib/vhost/rte_vhost_user.o 00:06:16.783 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:17.041 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:17.041 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:17.300 CC lib/iscsi/iscsi_subsystem.o 00:06:17.300 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:17.300 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:17.557 CC lib/ftl/utils/ftl_conf.o 00:06:17.557 CC lib/ftl/utils/ftl_md.o 00:06:17.557 CC lib/ftl/utils/ftl_mempool.o 00:06:17.831 CC lib/ftl/utils/ftl_bitmap.o 00:06:17.831 CC lib/iscsi/iscsi_rpc.o 00:06:17.831 CC lib/ftl/utils/ftl_property.o 00:06:17.831 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:18.091 CC lib/iscsi/task.o 00:06:18.091 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:18.091 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:18.349 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:18.349 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:18.349 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:18.620 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:18.620 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:18.620 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:18.620 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:18.620 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:18.620 CC lib/ftl/base/ftl_base_dev.o 00:06:18.620 CC lib/ftl/base/ftl_base_bdev.o 00:06:18.880 CC lib/ftl/ftl_trace.o 00:06:18.880 LIB libspdk_vhost.a 00:06:19.139 SO libspdk_vhost.so.8.0 00:06:19.139 LIB libspdk_ftl.a 00:06:19.139 LIB libspdk_iscsi.a 00:06:19.139 SYMLINK libspdk_vhost.so 00:06:19.404 SO libspdk_iscsi.so.8.0 00:06:19.404 SO libspdk_ftl.so.9.0 00:06:19.404 SYMLINK libspdk_iscsi.so 00:06:19.970 SYMLINK libspdk_ftl.so 00:06:19.970 LIB libspdk_nvmf.a 00:06:20.228 SO libspdk_nvmf.so.19.0 00:06:20.486 SYMLINK libspdk_nvmf.so 00:06:20.744 CC module/env_dpdk/env_dpdk_rpc.o 00:06:21.003 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:21.003 CC module/sock/posix/posix.o 00:06:21.003 CC module/accel/dsa/accel_dsa.o 00:06:21.003 CC module/accel/iaa/accel_iaa.o 00:06:21.003 CC module/accel/error/accel_error.o 00:06:21.003 CC module/keyring/file/keyring.o 00:06:21.003 CC module/accel/ioat/accel_ioat.o 00:06:21.003 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:21.003 CC module/blob/bdev/blob_bdev.o 00:06:21.003 LIB libspdk_env_dpdk_rpc.a 00:06:21.003 SO libspdk_env_dpdk_rpc.so.6.0 00:06:21.003 SYMLINK libspdk_env_dpdk_rpc.so 00:06:21.003 CC module/keyring/file/keyring_rpc.o 00:06:21.003 LIB libspdk_scheduler_dpdk_governor.a 00:06:21.260 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:21.261 LIB libspdk_scheduler_dynamic.a 00:06:21.261 CC module/accel/error/accel_error_rpc.o 00:06:21.261 CC 
module/accel/iaa/accel_iaa_rpc.o 00:06:21.261 SO libspdk_scheduler_dynamic.so.4.0 00:06:21.261 LIB libspdk_keyring_file.a 00:06:21.261 CC module/accel/dsa/accel_dsa_rpc.o 00:06:21.261 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:21.261 SO libspdk_keyring_file.so.1.0 00:06:21.261 SYMLINK libspdk_scheduler_dynamic.so 00:06:21.261 CC module/scheduler/gscheduler/gscheduler.o 00:06:21.261 CC module/accel/ioat/accel_ioat_rpc.o 00:06:21.261 SYMLINK libspdk_keyring_file.so 00:06:21.261 LIB libspdk_blob_bdev.a 00:06:21.261 LIB libspdk_accel_error.a 00:06:21.519 SO libspdk_blob_bdev.so.11.0 00:06:21.519 SO libspdk_accel_error.so.2.0 00:06:21.519 LIB libspdk_accel_iaa.a 00:06:21.519 LIB libspdk_accel_dsa.a 00:06:21.519 SO libspdk_accel_iaa.so.3.0 00:06:21.519 LIB libspdk_scheduler_gscheduler.a 00:06:21.519 SYMLINK libspdk_blob_bdev.so 00:06:21.519 LIB libspdk_accel_ioat.a 00:06:21.519 SYMLINK libspdk_accel_error.so 00:06:21.519 SO libspdk_accel_dsa.so.5.0 00:06:21.519 CC module/keyring/linux/keyring.o 00:06:21.519 CC module/keyring/linux/keyring_rpc.o 00:06:21.519 SO libspdk_scheduler_gscheduler.so.4.0 00:06:21.519 SO libspdk_accel_ioat.so.6.0 00:06:21.519 SYMLINK libspdk_accel_iaa.so 00:06:21.519 SYMLINK libspdk_scheduler_gscheduler.so 00:06:21.519 SYMLINK libspdk_accel_dsa.so 00:06:21.777 SYMLINK libspdk_accel_ioat.so 00:06:21.777 LIB libspdk_keyring_linux.a 00:06:21.777 SO libspdk_keyring_linux.so.1.0 00:06:21.777 CC module/bdev/gpt/gpt.o 00:06:21.777 CC module/bdev/delay/vbdev_delay.o 00:06:21.777 CC module/blobfs/bdev/blobfs_bdev.o 00:06:21.777 CC module/bdev/lvol/vbdev_lvol.o 00:06:21.777 CC module/bdev/error/vbdev_error.o 00:06:21.777 CC module/bdev/malloc/bdev_malloc.o 00:06:21.777 CC module/bdev/null/bdev_null.o 00:06:22.037 SYMLINK libspdk_keyring_linux.so 00:06:22.037 CC module/bdev/null/bdev_null_rpc.o 00:06:22.037 CC module/bdev/nvme/bdev_nvme.o 00:06:22.037 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:22.037 CC module/bdev/gpt/vbdev_gpt.o 00:06:22.037 LIB libspdk_sock_posix.a 00:06:22.037 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:22.301 SO libspdk_sock_posix.so.6.0 00:06:22.301 LIB libspdk_bdev_null.a 00:06:22.301 CC module/bdev/error/vbdev_error_rpc.o 00:06:22.301 SO libspdk_bdev_null.so.6.0 00:06:22.301 LIB libspdk_blobfs_bdev.a 00:06:22.301 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:22.301 SO libspdk_blobfs_bdev.so.6.0 00:06:22.301 SYMLINK libspdk_sock_posix.so 00:06:22.301 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:22.301 SYMLINK libspdk_bdev_null.so 00:06:22.301 CC module/bdev/nvme/nvme_rpc.o 00:06:22.301 CC module/bdev/nvme/bdev_mdns_client.o 00:06:22.301 LIB libspdk_bdev_malloc.a 00:06:22.301 LIB libspdk_bdev_gpt.a 00:06:22.301 SYMLINK libspdk_blobfs_bdev.so 00:06:22.301 CC module/bdev/nvme/vbdev_opal.o 00:06:22.301 SO libspdk_bdev_malloc.so.6.0 00:06:22.301 SO libspdk_bdev_gpt.so.6.0 00:06:22.566 LIB libspdk_bdev_error.a 00:06:22.566 SO libspdk_bdev_error.so.6.0 00:06:22.566 LIB libspdk_bdev_delay.a 00:06:22.566 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:22.566 SYMLINK libspdk_bdev_malloc.so 00:06:22.566 SYMLINK libspdk_bdev_gpt.so 00:06:22.566 SO libspdk_bdev_delay.so.6.0 00:06:22.566 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:22.566 SYMLINK libspdk_bdev_error.so 00:06:22.566 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:22.566 SYMLINK libspdk_bdev_delay.so 00:06:22.834 CC module/bdev/raid/bdev_raid.o 00:06:22.834 CC module/bdev/passthru/vbdev_passthru.o 00:06:22.834 CC module/bdev/raid/bdev_raid_rpc.o 00:06:22.834 CC module/bdev/passthru/vbdev_passthru_rpc.o 
00:06:22.834 CC module/bdev/split/vbdev_split.o 00:06:23.102 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:23.102 CC module/bdev/xnvme/bdev_xnvme.o 00:06:23.102 LIB libspdk_bdev_lvol.a 00:06:23.102 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:06:23.102 CC module/bdev/split/vbdev_split_rpc.o 00:06:23.102 SO libspdk_bdev_lvol.so.6.0 00:06:23.102 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:23.102 SYMLINK libspdk_bdev_lvol.so 00:06:23.102 LIB libspdk_bdev_passthru.a 00:06:23.366 SO libspdk_bdev_passthru.so.6.0 00:06:23.366 CC module/bdev/raid/bdev_raid_sb.o 00:06:23.366 CC module/bdev/raid/raid0.o 00:06:23.366 LIB libspdk_bdev_split.a 00:06:23.366 LIB libspdk_bdev_xnvme.a 00:06:23.366 SO libspdk_bdev_split.so.6.0 00:06:23.366 SYMLINK libspdk_bdev_passthru.so 00:06:23.366 SO libspdk_bdev_xnvme.so.3.0 00:06:23.366 SYMLINK libspdk_bdev_split.so 00:06:23.366 CC module/bdev/raid/raid1.o 00:06:23.624 CC module/bdev/aio/bdev_aio.o 00:06:23.624 LIB libspdk_bdev_zone_block.a 00:06:23.624 SYMLINK libspdk_bdev_xnvme.so 00:06:23.624 SO libspdk_bdev_zone_block.so.6.0 00:06:23.624 SYMLINK libspdk_bdev_zone_block.so 00:06:23.624 CC module/bdev/raid/concat.o 00:06:23.624 CC module/bdev/aio/bdev_aio_rpc.o 00:06:23.624 CC module/bdev/ftl/bdev_ftl.o 00:06:23.624 CC module/bdev/iscsi/bdev_iscsi.o 00:06:23.624 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:23.883 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:23.883 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:23.883 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:24.141 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:24.141 LIB libspdk_bdev_aio.a 00:06:24.141 SO libspdk_bdev_aio.so.6.0 00:06:24.141 SYMLINK libspdk_bdev_aio.so 00:06:24.141 LIB libspdk_bdev_iscsi.a 00:06:24.399 LIB libspdk_bdev_ftl.a 00:06:24.399 SO libspdk_bdev_iscsi.so.6.0 00:06:24.399 SO libspdk_bdev_ftl.so.6.0 00:06:24.399 SYMLINK libspdk_bdev_ftl.so 00:06:24.399 SYMLINK libspdk_bdev_iscsi.so 00:06:24.399 LIB libspdk_bdev_raid.a 00:06:24.399 SO libspdk_bdev_raid.so.6.0 00:06:24.657 SYMLINK libspdk_bdev_raid.so 00:06:24.915 LIB libspdk_bdev_virtio.a 00:06:24.915 SO libspdk_bdev_virtio.so.6.0 00:06:24.915 SYMLINK libspdk_bdev_virtio.so 00:06:25.852 LIB libspdk_bdev_nvme.a 00:06:25.852 SO libspdk_bdev_nvme.so.7.0 00:06:26.110 SYMLINK libspdk_bdev_nvme.so 00:06:26.677 CC module/event/subsystems/sock/sock.o 00:06:26.677 CC module/event/subsystems/vmd/vmd.o 00:06:26.677 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:26.677 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:26.677 CC module/event/subsystems/iobuf/iobuf.o 00:06:26.677 CC module/event/subsystems/keyring/keyring.o 00:06:26.677 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:26.677 CC module/event/subsystems/scheduler/scheduler.o 00:06:26.677 LIB libspdk_event_scheduler.a 00:06:26.677 LIB libspdk_event_vhost_blk.a 00:06:26.677 LIB libspdk_event_sock.a 00:06:26.677 LIB libspdk_event_keyring.a 00:06:26.935 SO libspdk_event_scheduler.so.4.0 00:06:26.935 LIB libspdk_event_vmd.a 00:06:26.935 SO libspdk_event_vhost_blk.so.3.0 00:06:26.935 SO libspdk_event_sock.so.5.0 00:06:26.935 SO libspdk_event_keyring.so.1.0 00:06:26.935 LIB libspdk_event_iobuf.a 00:06:26.935 SO libspdk_event_vmd.so.6.0 00:06:26.935 SO libspdk_event_iobuf.so.3.0 00:06:26.935 SYMLINK libspdk_event_keyring.so 00:06:26.935 SYMLINK libspdk_event_scheduler.so 00:06:26.935 SYMLINK libspdk_event_sock.so 00:06:26.935 SYMLINK libspdk_event_vhost_blk.so 00:06:26.935 SYMLINK libspdk_event_iobuf.so 00:06:26.935 SYMLINK libspdk_event_vmd.so 00:06:27.193 CC 
module/event/subsystems/accel/accel.o 00:06:27.451 LIB libspdk_event_accel.a 00:06:27.451 SO libspdk_event_accel.so.6.0 00:06:27.451 SYMLINK libspdk_event_accel.so 00:06:27.709 CC module/event/subsystems/bdev/bdev.o 00:06:27.967 LIB libspdk_event_bdev.a 00:06:27.967 SO libspdk_event_bdev.so.6.0 00:06:28.225 SYMLINK libspdk_event_bdev.so 00:06:28.483 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:28.483 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:28.483 CC module/event/subsystems/ublk/ublk.o 00:06:28.483 CC module/event/subsystems/nbd/nbd.o 00:06:28.483 CC module/event/subsystems/scsi/scsi.o 00:06:28.483 LIB libspdk_event_ublk.a 00:06:28.483 LIB libspdk_event_nbd.a 00:06:28.483 SO libspdk_event_ublk.so.3.0 00:06:28.483 SO libspdk_event_nbd.so.6.0 00:06:28.740 LIB libspdk_event_scsi.a 00:06:28.740 SYMLINK libspdk_event_nbd.so 00:06:28.740 SO libspdk_event_scsi.so.6.0 00:06:28.740 SYMLINK libspdk_event_ublk.so 00:06:28.740 LIB libspdk_event_nvmf.a 00:06:28.740 SO libspdk_event_nvmf.so.6.0 00:06:28.740 SYMLINK libspdk_event_scsi.so 00:06:28.740 SYMLINK libspdk_event_nvmf.so 00:06:28.998 CC module/event/subsystems/iscsi/iscsi.o 00:06:28.998 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:29.256 LIB libspdk_event_iscsi.a 00:06:29.256 LIB libspdk_event_vhost_scsi.a 00:06:29.256 SO libspdk_event_iscsi.so.6.0 00:06:29.256 SO libspdk_event_vhost_scsi.so.3.0 00:06:29.256 SYMLINK libspdk_event_vhost_scsi.so 00:06:29.256 SYMLINK libspdk_event_iscsi.so 00:06:29.515 SO libspdk.so.6.0 00:06:29.515 SYMLINK libspdk.so 00:06:29.772 CC app/trace_record/trace_record.o 00:06:29.772 CXX app/trace/trace.o 00:06:29.772 CC app/nvmf_tgt/nvmf_main.o 00:06:29.772 CC app/iscsi_tgt/iscsi_tgt.o 00:06:29.772 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:29.772 CC examples/util/zipf/zipf.o 00:06:29.772 CC app/spdk_tgt/spdk_tgt.o 00:06:30.030 CC examples/ioat/perf/perf.o 00:06:30.030 CC test/thread/poller_perf/poller_perf.o 00:06:30.030 CC test/dma/test_dma/test_dma.o 00:06:30.286 LINK iscsi_tgt 00:06:30.286 LINK interrupt_tgt 00:06:30.286 LINK zipf 00:06:30.286 LINK poller_perf 00:06:30.286 LINK nvmf_tgt 00:06:30.286 LINK spdk_tgt 00:06:30.286 LINK ioat_perf 00:06:30.286 LINK spdk_trace_record 00:06:30.543 LINK spdk_trace 00:06:30.801 CC examples/ioat/verify/verify.o 00:06:30.801 CC app/spdk_lspci/spdk_lspci.o 00:06:30.801 TEST_HEADER include/spdk/accel.h 00:06:30.801 TEST_HEADER include/spdk/accel_module.h 00:06:30.801 TEST_HEADER include/spdk/assert.h 00:06:30.801 TEST_HEADER include/spdk/barrier.h 00:06:30.801 TEST_HEADER include/spdk/base64.h 00:06:30.801 TEST_HEADER include/spdk/bdev.h 00:06:30.801 TEST_HEADER include/spdk/bdev_module.h 00:06:30.801 LINK test_dma 00:06:30.801 TEST_HEADER include/spdk/bdev_zone.h 00:06:30.801 TEST_HEADER include/spdk/bit_array.h 00:06:30.801 TEST_HEADER include/spdk/bit_pool.h 00:06:30.801 CC app/spdk_nvme_perf/perf.o 00:06:30.801 TEST_HEADER include/spdk/blob_bdev.h 00:06:30.801 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:30.801 TEST_HEADER include/spdk/blobfs.h 00:06:30.801 TEST_HEADER include/spdk/blob.h 00:06:30.801 TEST_HEADER include/spdk/conf.h 00:06:30.801 TEST_HEADER include/spdk/config.h 00:06:30.801 TEST_HEADER include/spdk/cpuset.h 00:06:30.801 TEST_HEADER include/spdk/crc16.h 00:06:30.801 TEST_HEADER include/spdk/crc32.h 00:06:30.801 TEST_HEADER include/spdk/crc64.h 00:06:30.801 TEST_HEADER include/spdk/dif.h 00:06:30.801 TEST_HEADER include/spdk/dma.h 00:06:30.801 TEST_HEADER include/spdk/endian.h 00:06:30.801 TEST_HEADER include/spdk/env_dpdk.h 
00:06:30.801 TEST_HEADER include/spdk/env.h 00:06:30.801 TEST_HEADER include/spdk/event.h 00:06:30.801 TEST_HEADER include/spdk/fd_group.h 00:06:30.801 TEST_HEADER include/spdk/fd.h 00:06:30.801 TEST_HEADER include/spdk/file.h 00:06:30.801 TEST_HEADER include/spdk/ftl.h 00:06:30.801 TEST_HEADER include/spdk/gpt_spec.h 00:06:30.801 TEST_HEADER include/spdk/hexlify.h 00:06:30.801 TEST_HEADER include/spdk/histogram_data.h 00:06:30.801 CC examples/sock/hello_world/hello_sock.o 00:06:30.801 TEST_HEADER include/spdk/idxd.h 00:06:30.801 TEST_HEADER include/spdk/idxd_spec.h 00:06:30.801 CC examples/thread/thread/thread_ex.o 00:06:30.801 TEST_HEADER include/spdk/init.h 00:06:30.801 TEST_HEADER include/spdk/ioat.h 00:06:30.801 TEST_HEADER include/spdk/ioat_spec.h 00:06:30.801 TEST_HEADER include/spdk/iscsi_spec.h 00:06:30.801 TEST_HEADER include/spdk/json.h 00:06:30.801 TEST_HEADER include/spdk/jsonrpc.h 00:06:30.801 CC test/app/bdev_svc/bdev_svc.o 00:06:30.801 TEST_HEADER include/spdk/keyring.h 00:06:30.801 TEST_HEADER include/spdk/keyring_module.h 00:06:30.801 TEST_HEADER include/spdk/likely.h 00:06:30.801 CC examples/vmd/lsvmd/lsvmd.o 00:06:31.059 TEST_HEADER include/spdk/log.h 00:06:31.059 TEST_HEADER include/spdk/lvol.h 00:06:31.059 TEST_HEADER include/spdk/memory.h 00:06:31.059 TEST_HEADER include/spdk/mmio.h 00:06:31.059 TEST_HEADER include/spdk/nbd.h 00:06:31.059 TEST_HEADER include/spdk/net.h 00:06:31.059 TEST_HEADER include/spdk/notify.h 00:06:31.059 TEST_HEADER include/spdk/nvme.h 00:06:31.059 TEST_HEADER include/spdk/nvme_intel.h 00:06:31.059 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:31.059 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:31.059 TEST_HEADER include/spdk/nvme_spec.h 00:06:31.059 TEST_HEADER include/spdk/nvme_zns.h 00:06:31.059 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:31.059 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:31.059 TEST_HEADER include/spdk/nvmf.h 00:06:31.059 TEST_HEADER include/spdk/nvmf_spec.h 00:06:31.059 TEST_HEADER include/spdk/nvmf_transport.h 00:06:31.059 TEST_HEADER include/spdk/opal.h 00:06:31.059 TEST_HEADER include/spdk/opal_spec.h 00:06:31.059 TEST_HEADER include/spdk/pci_ids.h 00:06:31.059 TEST_HEADER include/spdk/pipe.h 00:06:31.059 TEST_HEADER include/spdk/queue.h 00:06:31.059 TEST_HEADER include/spdk/reduce.h 00:06:31.059 TEST_HEADER include/spdk/rpc.h 00:06:31.059 TEST_HEADER include/spdk/scheduler.h 00:06:31.059 LINK spdk_lspci 00:06:31.059 TEST_HEADER include/spdk/scsi.h 00:06:31.059 TEST_HEADER include/spdk/scsi_spec.h 00:06:31.059 CC examples/idxd/perf/perf.o 00:06:31.059 TEST_HEADER include/spdk/sock.h 00:06:31.059 TEST_HEADER include/spdk/stdinc.h 00:06:31.059 TEST_HEADER include/spdk/string.h 00:06:31.059 TEST_HEADER include/spdk/thread.h 00:06:31.059 TEST_HEADER include/spdk/trace.h 00:06:31.059 TEST_HEADER include/spdk/trace_parser.h 00:06:31.059 TEST_HEADER include/spdk/tree.h 00:06:31.059 TEST_HEADER include/spdk/ublk.h 00:06:31.059 TEST_HEADER include/spdk/util.h 00:06:31.059 TEST_HEADER include/spdk/uuid.h 00:06:31.059 TEST_HEADER include/spdk/version.h 00:06:31.059 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:31.059 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:31.059 TEST_HEADER include/spdk/vhost.h 00:06:31.059 TEST_HEADER include/spdk/vmd.h 00:06:31.059 TEST_HEADER include/spdk/xor.h 00:06:31.059 TEST_HEADER include/spdk/zipf.h 00:06:31.059 CXX test/cpp_headers/accel.o 00:06:31.317 LINK lsvmd 00:06:31.317 LINK verify 00:06:31.317 LINK bdev_svc 00:06:31.317 LINK hello_sock 00:06:31.576 LINK thread 00:06:31.576 CXX 
test/cpp_headers/accel_module.o 00:06:31.576 CC test/event/event_perf/event_perf.o 00:06:31.576 LINK idxd_perf 00:06:31.576 CC examples/vmd/led/led.o 00:06:31.576 CC test/env/mem_callbacks/mem_callbacks.o 00:06:31.834 CC test/event/reactor/reactor.o 00:06:31.834 CC test/event/reactor_perf/reactor_perf.o 00:06:31.834 CXX test/cpp_headers/assert.o 00:06:31.834 LINK event_perf 00:06:31.834 LINK reactor_perf 00:06:32.091 LINK led 00:06:32.091 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:32.091 LINK reactor 00:06:32.091 CC test/app/histogram_perf/histogram_perf.o 00:06:32.091 CXX test/cpp_headers/barrier.o 00:06:32.091 CC test/app/jsoncat/jsoncat.o 00:06:32.349 CC test/app/stub/stub.o 00:06:32.349 LINK histogram_perf 00:06:32.349 CXX test/cpp_headers/base64.o 00:06:32.349 LINK jsoncat 00:06:32.349 CC test/event/app_repeat/app_repeat.o 00:06:32.607 CC test/nvme/aer/aer.o 00:06:32.607 LINK stub 00:06:32.607 CC examples/nvme/hello_world/hello_world.o 00:06:32.607 CXX test/cpp_headers/bdev.o 00:06:32.607 LINK spdk_nvme_perf 00:06:32.865 LINK app_repeat 00:06:32.865 CC examples/nvme/reconnect/reconnect.o 00:06:32.865 LINK mem_callbacks 00:06:32.865 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:32.865 LINK nvme_fuzz 00:06:33.123 CXX test/cpp_headers/bdev_module.o 00:06:33.123 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:33.123 LINK hello_world 00:06:33.123 LINK aer 00:06:33.123 CC app/spdk_nvme_identify/identify.o 00:06:33.381 CC test/env/vtophys/vtophys.o 00:06:33.381 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:33.381 CC test/event/scheduler/scheduler.o 00:06:33.381 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:33.381 LINK reconnect 00:06:33.381 CXX test/cpp_headers/bdev_zone.o 00:06:33.638 LINK vtophys 00:06:33.638 CC test/nvme/reset/reset.o 00:06:33.638 CC app/spdk_nvme_discover/discovery_aer.o 00:06:33.638 CXX test/cpp_headers/bit_array.o 00:06:33.638 LINK env_dpdk_post_init 00:06:33.638 CXX test/cpp_headers/bit_pool.o 00:06:33.638 LINK scheduler 00:06:33.896 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:33.896 LINK spdk_nvme_discover 00:06:34.154 CXX test/cpp_headers/blob_bdev.o 00:06:34.154 LINK reset 00:06:34.154 CC test/env/memory/memory_ut.o 00:06:34.154 CC test/nvme/sgl/sgl.o 00:06:34.154 LINK vhost_fuzz 00:06:34.154 CC test/nvme/e2edp/nvme_dp.o 00:06:34.413 CXX test/cpp_headers/blobfs_bdev.o 00:06:34.413 CC test/nvme/err_injection/err_injection.o 00:06:34.413 CC test/nvme/startup/startup.o 00:06:34.413 CC test/nvme/overhead/overhead.o 00:06:34.671 CXX test/cpp_headers/blobfs.o 00:06:34.671 LINK sgl 00:06:34.671 LINK spdk_nvme_identify 00:06:34.671 LINK err_injection 00:06:34.671 LINK nvme_dp 00:06:34.929 LINK startup 00:06:34.929 CXX test/cpp_headers/blob.o 00:06:34.929 LINK overhead 00:06:34.929 LINK nvme_manage 00:06:35.187 CXX test/cpp_headers/conf.o 00:06:35.187 CC app/spdk_top/spdk_top.o 00:06:35.187 CC test/nvme/reserve/reserve.o 00:06:35.187 CC test/nvme/simple_copy/simple_copy.o 00:06:35.187 CC test/nvme/connect_stress/connect_stress.o 00:06:35.187 CC test/nvme/boot_partition/boot_partition.o 00:06:35.444 CC app/vhost/vhost.o 00:06:35.444 CC examples/nvme/arbitration/arbitration.o 00:06:35.444 CXX test/cpp_headers/config.o 00:06:35.444 CXX test/cpp_headers/cpuset.o 00:06:35.702 LINK simple_copy 00:06:35.702 LINK boot_partition 00:06:35.702 LINK reserve 00:06:35.702 LINK connect_stress 00:06:35.702 CXX test/cpp_headers/crc16.o 00:06:35.702 LINK vhost 00:06:35.960 CXX test/cpp_headers/crc32.o 00:06:35.960 CC test/nvme/compliance/nvme_compliance.o 00:06:35.960 
CC test/nvme/fused_ordering/fused_ordering.o 00:06:36.218 CC test/nvme/fdp/fdp.o 00:06:36.218 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:36.219 LINK arbitration 00:06:36.219 CXX test/cpp_headers/crc64.o 00:06:36.219 CC test/nvme/cuse/cuse.o 00:06:36.477 LINK memory_ut 00:06:36.477 LINK doorbell_aers 00:06:36.477 LINK fused_ordering 00:06:36.477 LINK nvme_compliance 00:06:36.477 CC examples/nvme/hotplug/hotplug.o 00:06:36.477 CXX test/cpp_headers/dif.o 00:06:36.736 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:36.736 LINK spdk_top 00:06:36.736 LINK iscsi_fuzz 00:06:36.736 LINK fdp 00:06:36.736 CC examples/nvme/abort/abort.o 00:06:36.995 LINK hotplug 00:06:36.995 CC test/env/pci/pci_ut.o 00:06:36.995 CXX test/cpp_headers/dma.o 00:06:36.995 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:36.995 LINK cmb_copy 00:06:36.995 CC app/spdk_dd/spdk_dd.o 00:06:36.995 CC test/rpc_client/rpc_client_test.o 00:06:37.254 CXX test/cpp_headers/endian.o 00:06:37.254 LINK pmr_persistence 00:06:37.254 CC app/fio/nvme/fio_plugin.o 00:06:37.254 CC test/accel/dif/dif.o 00:06:37.254 LINK abort 00:06:37.511 LINK pci_ut 00:06:37.511 CXX test/cpp_headers/env_dpdk.o 00:06:37.511 LINK rpc_client_test 00:06:37.511 CC app/fio/bdev/fio_plugin.o 00:06:37.511 CXX test/cpp_headers/env.o 00:06:37.511 LINK spdk_dd 00:06:37.511 CXX test/cpp_headers/event.o 00:06:37.824 CXX test/cpp_headers/fd_group.o 00:06:37.824 CXX test/cpp_headers/fd.o 00:06:38.081 CXX test/cpp_headers/file.o 00:06:38.081 CC examples/accel/perf/accel_perf.o 00:06:38.081 LINK dif 00:06:38.081 CC test/blobfs/mkfs/mkfs.o 00:06:38.081 CXX test/cpp_headers/ftl.o 00:06:38.081 LINK spdk_nvme 00:06:38.081 CC test/lvol/esnap/esnap.o 00:06:38.339 LINK spdk_bdev 00:06:38.339 CXX test/cpp_headers/gpt_spec.o 00:06:38.339 LINK mkfs 00:06:38.339 CXX test/cpp_headers/hexlify.o 00:06:38.608 CC examples/blob/hello_world/hello_blob.o 00:06:38.608 CC examples/blob/cli/blobcli.o 00:06:38.608 CXX test/cpp_headers/histogram_data.o 00:06:38.608 CXX test/cpp_headers/idxd.o 00:06:38.608 CXX test/cpp_headers/idxd_spec.o 00:06:38.608 LINK accel_perf 00:06:38.608 CXX test/cpp_headers/init.o 00:06:38.868 CXX test/cpp_headers/ioat.o 00:06:38.868 LINK hello_blob 00:06:38.868 CXX test/cpp_headers/ioat_spec.o 00:06:38.868 CXX test/cpp_headers/iscsi_spec.o 00:06:38.868 CC test/bdev/bdevio/bdevio.o 00:06:38.868 LINK cuse 00:06:38.868 CXX test/cpp_headers/json.o 00:06:39.126 CXX test/cpp_headers/jsonrpc.o 00:06:39.126 CXX test/cpp_headers/keyring.o 00:06:39.126 CXX test/cpp_headers/keyring_module.o 00:06:39.126 CXX test/cpp_headers/likely.o 00:06:39.126 CXX test/cpp_headers/log.o 00:06:39.384 CC examples/bdev/hello_world/hello_bdev.o 00:06:39.384 CXX test/cpp_headers/lvol.o 00:06:39.384 CC examples/bdev/bdevperf/bdevperf.o 00:06:39.384 CXX test/cpp_headers/memory.o 00:06:39.384 LINK bdevio 00:06:39.384 CXX test/cpp_headers/mmio.o 00:06:39.384 CXX test/cpp_headers/nbd.o 00:06:39.384 CXX test/cpp_headers/net.o 00:06:39.384 LINK blobcli 00:06:39.384 CXX test/cpp_headers/notify.o 00:06:39.384 CXX test/cpp_headers/nvme.o 00:06:39.644 CXX test/cpp_headers/nvme_intel.o 00:06:39.644 CXX test/cpp_headers/nvme_ocssd.o 00:06:39.644 LINK hello_bdev 00:06:39.644 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:39.644 CXX test/cpp_headers/nvme_spec.o 00:06:39.644 CXX test/cpp_headers/nvme_zns.o 00:06:39.902 CXX test/cpp_headers/nvmf_cmd.o 00:06:39.902 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:39.902 CXX test/cpp_headers/nvmf.o 00:06:39.902 CXX test/cpp_headers/nvmf_spec.o 00:06:39.902 CXX 
test/cpp_headers/nvmf_transport.o 00:06:39.902 CXX test/cpp_headers/opal.o 00:06:39.902 CXX test/cpp_headers/opal_spec.o 00:06:40.160 CXX test/cpp_headers/pci_ids.o 00:06:40.160 CXX test/cpp_headers/pipe.o 00:06:40.160 CXX test/cpp_headers/queue.o 00:06:40.160 CXX test/cpp_headers/reduce.o 00:06:40.160 CXX test/cpp_headers/rpc.o 00:06:40.160 CXX test/cpp_headers/scheduler.o 00:06:40.160 CXX test/cpp_headers/scsi.o 00:06:40.160 CXX test/cpp_headers/scsi_spec.o 00:06:40.160 CXX test/cpp_headers/sock.o 00:06:40.160 CXX test/cpp_headers/stdinc.o 00:06:40.419 CXX test/cpp_headers/string.o 00:06:40.419 CXX test/cpp_headers/thread.o 00:06:40.419 CXX test/cpp_headers/trace.o 00:06:40.419 CXX test/cpp_headers/trace_parser.o 00:06:40.419 CXX test/cpp_headers/tree.o 00:06:40.419 CXX test/cpp_headers/ublk.o 00:06:40.419 CXX test/cpp_headers/util.o 00:06:40.419 CXX test/cpp_headers/uuid.o 00:06:40.419 CXX test/cpp_headers/version.o 00:06:40.419 LINK bdevperf 00:06:40.419 CXX test/cpp_headers/vfio_user_pci.o 00:06:40.678 CXX test/cpp_headers/vfio_user_spec.o 00:06:40.678 CXX test/cpp_headers/vhost.o 00:06:40.678 CXX test/cpp_headers/vmd.o 00:06:40.678 CXX test/cpp_headers/xor.o 00:06:40.678 CXX test/cpp_headers/zipf.o 00:06:41.246 CC examples/nvmf/nvmf/nvmf.o 00:06:41.525 LINK nvmf 00:06:46.798 LINK esnap 00:06:47.057 00:06:47.057 real 1m55.973s 00:06:47.057 user 11m41.010s 00:06:47.057 sys 2m8.402s 00:06:47.057 03:35:01 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:06:47.057 ************************************ 00:06:47.057 END TEST make 00:06:47.057 ************************************ 00:06:47.057 03:35:01 make -- common/autotest_common.sh@10 -- $ set +x 00:06:47.057 03:35:01 -- common/autotest_common.sh@1142 -- $ return 0 00:06:47.057 03:35:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:47.057 03:35:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:47.057 03:35:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:47.057 03:35:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.057 03:35:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:47.057 03:35:01 -- pm/common@44 -- $ pid=5237 00:06:47.057 03:35:01 -- pm/common@50 -- $ kill -TERM 5237 00:06:47.057 03:35:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.057 03:35:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:47.057 03:35:01 -- pm/common@44 -- $ pid=5239 00:06:47.057 03:35:01 -- pm/common@50 -- $ kill -TERM 5239 00:06:47.315 03:35:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:47.315 03:35:02 -- nvmf/common.sh@7 -- # uname -s 00:06:47.315 03:35:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.315 03:35:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.315 03:35:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.315 03:35:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.315 03:35:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.315 03:35:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.315 03:35:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.315 03:35:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.315 03:35:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.315 03:35:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.316 03:35:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6a2b635f-f6bf-4f3f-b455-5414d1e9f6aa 00:06:47.316 03:35:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=6a2b635f-f6bf-4f3f-b455-5414d1e9f6aa 00:06:47.316 03:35:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.316 03:35:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.316 03:35:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:47.316 03:35:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.316 03:35:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.316 03:35:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.316 03:35:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.316 03:35:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.316 03:35:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.316 03:35:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.316 03:35:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.316 03:35:02 -- paths/export.sh@5 -- # export PATH 00:06:47.316 03:35:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.316 03:35:02 -- nvmf/common.sh@47 -- # : 0 00:06:47.316 03:35:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.316 03:35:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.316 03:35:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.316 03:35:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.316 03:35:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.316 03:35:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.316 03:35:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.316 03:35:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.316 03:35:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:47.316 03:35:02 -- spdk/autotest.sh@32 -- # uname -s 00:06:47.316 03:35:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:47.316 03:35:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:47.316 03:35:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:47.316 03:35:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:47.316 03:35:02 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:47.316 03:35:02 -- spdk/autotest.sh@44 -- # modprobe nbd 
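The prologue traced just above swaps the kernel's core-dump handler so that any crash during the run is caught by SPDK's core-collector.sh instead of systemd-coredump. The lines below are a minimal, assumed reconstruction of that step, not a copy of autotest.sh: the pipe form of /proc/sys/kernel/core_pattern and the %P/%s/%t specifiers (crashing PID, signal number, time of dump) are documented kernel behaviour, while the variable names and the destination of the second echo in the trace are guesses.
  # sketch only -- reconstructs the traced core_pattern swap under the assumptions above
  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)   # typically |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
  mkdir -p "$output_dir/coredumps"                         # $output_dir stands in for spdk/../output here
  # hand core dumps to SPDK's collector: %P = crashing PID, %s = signal, %t = time of dump
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
  # the trace also echoes the coredumps directory; where that echo is redirected is not visible in this log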
00:06:47.316 03:35:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:47.316 03:35:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:47.316 03:35:02 -- spdk/autotest.sh@48 -- # udevadm_pid=54202 00:06:47.316 03:35:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:47.316 03:35:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:47.316 03:35:02 -- pm/common@17 -- # local monitor 00:06:47.316 03:35:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.316 03:35:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.316 03:35:02 -- pm/common@25 -- # sleep 1 00:06:47.316 03:35:02 -- pm/common@21 -- # date +%s 00:06:47.316 03:35:02 -- pm/common@21 -- # date +%s 00:06:47.316 03:35:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721964902 00:06:47.316 03:35:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721964902 00:06:47.316 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721964902_collect-vmstat.pm.log 00:06:47.316 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721964902_collect-cpu-load.pm.log 00:06:48.251 03:35:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:48.251 03:35:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:48.251 03:35:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.252 03:35:03 -- common/autotest_common.sh@10 -- # set +x 00:06:48.252 03:35:03 -- spdk/autotest.sh@59 -- # create_test_list 00:06:48.252 03:35:03 -- common/autotest_common.sh@746 -- # xtrace_disable 00:06:48.252 03:35:03 -- common/autotest_common.sh@10 -- # set +x 00:06:48.252 03:35:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:48.509 03:35:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:48.509 03:35:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:48.509 03:35:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:48.509 03:35:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:48.509 03:35:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:48.509 03:35:03 -- common/autotest_common.sh@1455 -- # uname 00:06:48.509 03:35:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:48.509 03:35:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:48.509 03:35:03 -- common/autotest_common.sh@1475 -- # uname 00:06:48.509 03:35:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:48.509 03:35:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:48.509 03:35:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:48.509 03:35:03 -- spdk/autotest.sh@72 -- # hash lcov 00:06:48.509 03:35:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:48.509 03:35:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:48.509 --rc lcov_branch_coverage=1 00:06:48.509 --rc lcov_function_coverage=1 00:06:48.509 --rc genhtml_branch_coverage=1 00:06:48.509 --rc genhtml_function_coverage=1 00:06:48.509 --rc genhtml_legend=1 00:06:48.509 --rc geninfo_all_blocks=1 00:06:48.509 ' 00:06:48.509 03:35:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:48.509 --rc lcov_branch_coverage=1 00:06:48.509 --rc 
lcov_function_coverage=1 00:06:48.509 --rc genhtml_branch_coverage=1 00:06:48.509 --rc genhtml_function_coverage=1 00:06:48.509 --rc genhtml_legend=1 00:06:48.509 --rc geninfo_all_blocks=1 00:06:48.509 ' 00:06:48.509 03:35:03 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:48.509 --rc lcov_branch_coverage=1 00:06:48.509 --rc lcov_function_coverage=1 00:06:48.509 --rc genhtml_branch_coverage=1 00:06:48.509 --rc genhtml_function_coverage=1 00:06:48.509 --rc genhtml_legend=1 00:06:48.509 --rc geninfo_all_blocks=1 00:06:48.509 --no-external' 00:06:48.509 03:35:03 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:48.509 --rc lcov_branch_coverage=1 00:06:48.509 --rc lcov_function_coverage=1 00:06:48.509 --rc genhtml_branch_coverage=1 00:06:48.509 --rc genhtml_function_coverage=1 00:06:48.509 --rc genhtml_legend=1 00:06:48.509 --rc geninfo_all_blocks=1 00:06:48.509 --no-external' 00:06:48.509 03:35:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:48.509 lcov: LCOV version 1.14 00:06:48.509 03:35:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:06.639 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:06.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:21.513 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:07:21.513 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:21.513 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:07:21.513 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:21.514 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:21.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:07:21.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:07:21.515 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:07:21.515 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:21.515 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:21.515 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:21.515 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:21.515 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:21.515 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:07:21.515 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:21.515 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:07:21.515 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:07:21.515 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:07:21.515 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:21.515 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:24.046 03:35:38 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:07:24.046 03:35:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.046 03:35:38 -- common/autotest_common.sh@10 -- # set +x 00:07:24.046 03:35:38 -- spdk/autotest.sh@91 -- # rm -f 00:07:24.046 03:35:38 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:24.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:07:24.872 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:24.872 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:24.872 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:24.872 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:24.872 03:35:39 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:07:24.872 03:35:39 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:24.872 03:35:39 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:24.872 03:35:39 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:24.872 03:35:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:24.872 03:35:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:24.872 03:35:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:24.872 03:35:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:24.872 03:35:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:07:24.872 03:35:39 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:07:24.872 03:35:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:24.872 03:35:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:07:24.872 03:35:39 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:07:24.872 03:35:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:24.872 03:35:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:24.872 03:35:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:07:24.872 03:35:39 -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:07:24.872 03:35:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:24.872 03:35:39 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:07:24.872 03:35:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:24.872 03:35:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:24.872 03:35:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:07:24.872 03:35:39 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:07:24.872 03:35:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:25.132 No valid GPT data, bailing 00:07:25.132 03:35:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:25.132 03:35:39 -- scripts/common.sh@391 -- # pt= 00:07:25.132 03:35:39 -- scripts/common.sh@392 -- # return 1 00:07:25.132 03:35:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:25.132 1+0 records in 00:07:25.132 1+0 records out 00:07:25.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127127 s, 82.5 MB/s 00:07:25.132 03:35:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.132 03:35:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:25.132 03:35:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:07:25.132 03:35:39 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:07:25.132 03:35:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:25.132 No valid GPT data, bailing 00:07:25.132 03:35:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:25.132 03:35:39 -- scripts/common.sh@391 -- # pt= 00:07:25.132 03:35:39 -- scripts/common.sh@392 -- # return 1 00:07:25.132 03:35:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:25.132 1+0 records in 00:07:25.132 1+0 records out 00:07:25.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393226 s, 267 MB/s 00:07:25.132 03:35:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.132 03:35:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:25.132 03:35:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:07:25.132 03:35:39 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:07:25.132 03:35:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:25.132 No valid GPT data, bailing 00:07:25.132 03:35:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:25.132 03:35:39 -- scripts/common.sh@391 -- # pt= 00:07:25.132 03:35:40 -- scripts/common.sh@392 -- # return 1 00:07:25.132 03:35:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:25.132 1+0 records in 00:07:25.132 1+0 records out 00:07:25.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00347484 s, 302 MB/s 00:07:25.132 03:35:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.132 03:35:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:25.132 03:35:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:07:25.132 03:35:40 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:07:25.132 03:35:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:25.397 No valid GPT data, bailing 00:07:25.397 03:35:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:25.397 03:35:40 -- scripts/common.sh@391 -- # pt= 00:07:25.397 03:35:40 -- scripts/common.sh@392 -- # return 1 00:07:25.397 03:35:40 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:25.397 1+0 records in 00:07:25.397 1+0 records out 00:07:25.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380434 s, 276 MB/s 00:07:25.397 03:35:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.397 03:35:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:25.397 03:35:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:07:25.397 03:35:40 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:07:25.397 03:35:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:25.397 No valid GPT data, bailing 00:07:25.397 03:35:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:25.397 03:35:40 -- scripts/common.sh@391 -- # pt= 00:07:25.397 03:35:40 -- scripts/common.sh@392 -- # return 1 00:07:25.397 03:35:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:25.397 1+0 records in 00:07:25.397 1+0 records out 00:07:25.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379856 s, 276 MB/s 00:07:25.397 03:35:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.397 03:35:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:25.397 03:35:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:07:25.397 03:35:40 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:07:25.397 03:35:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:25.397 No valid GPT data, bailing 00:07:25.397 03:35:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:25.397 03:35:40 -- scripts/common.sh@391 -- # pt= 00:07:25.397 03:35:40 -- scripts/common.sh@392 -- # return 1 00:07:25.397 03:35:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:25.397 1+0 records in 00:07:25.397 1+0 records out 00:07:25.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00345461 s, 304 MB/s 00:07:25.397 03:35:40 -- spdk/autotest.sh@118 -- # sync 00:07:25.397 03:35:40 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:25.397 03:35:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:25.397 03:35:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:27.301 03:35:41 -- spdk/autotest.sh@124 -- # uname -s 00:07:27.301 03:35:41 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:07:27.301 03:35:41 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:07:27.301 03:35:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.301 03:35:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.301 03:35:41 -- common/autotest_common.sh@10 -- # set +x 00:07:27.301 ************************************ 00:07:27.301 START TEST setup.sh 00:07:27.301 ************************************ 00:07:27.301 03:35:41 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:07:27.301 * Looking for test storage... 
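The pre-cleanup pass traced a little earlier (setup.sh reset, get_zoned_devs, then the blkid/dd sequence per namespace) boils down to: walk every whole NVMe namespace, skip zoned ones, and zero the first MiB of anything that is idle and carries no partition table. Below is a condensed sketch of that loop, reconstructed from the xtrace output rather than copied from autotest.sh; the spdk-gpt.py probe that the real block_in_use check also runs is omitted here.
  # condensed sketch of the traced wipe loop; helper steps inlined
  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do                                   # whole namespaces only, no partitions
      name=$(basename "$dev")
      # skip zoned namespaces (the is_block_zoned test in the trace)
      [[ -e /sys/block/$name/queue/zoned && $(< /sys/block/$name/queue/zoned) != none ]] && continue
      # keep devices that already hold a partition table (part of block_in_use in the trace)
      [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
      dd if=/dev/zero of="$dev" bs=1M count=1                        # "No valid GPT data, bailing" -> wipe first MiB
  done
  sync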
00:07:27.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:27.301 03:35:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:07:27.301 03:35:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:07:27.301 03:35:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:07:27.301 03:35:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.301 03:35:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.301 03:35:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:27.301 ************************************ 00:07:27.301 START TEST acl 00:07:27.301 ************************************ 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:07:27.301 * Looking for test storage... 00:07:27.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:27.301 03:35:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:07:27.301 03:35:42 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:27.301 03:35:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:27.301 03:35:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:07:27.301 03:35:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:07:27.301 03:35:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:07:27.301 03:35:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:07:27.301 03:35:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:07:27.301 03:35:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:27.301 03:35:42 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:28.676 03:35:43 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:07:28.676 03:35:43 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:07:28.676 03:35:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.676 03:35:43 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:07:28.676 03:35:43 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:07:28.676 03:35:43 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:28.933 03:35:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:07:28.933 03:35:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:28.933 03:35:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:29.213 Hugepages 00:07:29.214 node hugesize free / total 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:29.472 00:07:29.472 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:29.472 03:35:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:07:29.732 03:35:44 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:07:29.732 03:35:44 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.732 03:35:44 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.732 03:35:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:29.732 ************************************ 00:07:29.732 START TEST denied 00:07:29.732 ************************************ 00:07:29.732 03:35:44 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:07:29.732 03:35:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:07:29.732 03:35:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:07:29.732 03:35:44 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:07:29.732 03:35:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:07:29.732 03:35:44 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:31.106 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:07:31.106 03:35:45 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:31.106 03:35:45 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:37.674 00:07:37.674 real 0m7.068s 00:07:37.674 user 0m0.808s 00:07:37.674 sys 0m1.309s 00:07:37.674 03:35:51 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.674 ************************************ 00:07:37.674 END TEST denied 00:07:37.674 ************************************ 00:07:37.674 03:35:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:07:37.674 03:35:51 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:37.674 03:35:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:37.674 03:35:51 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.674 03:35:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.674 03:35:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:37.674 ************************************ 00:07:37.674 START TEST allowed 00:07:37.674 ************************************ 00:07:37.674 03:35:51 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:07:37.674 03:35:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:07:37.674 03:35:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:07:37.674 03:35:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:07:37.674 03:35:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:07:37.674 03:35:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:37.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:37.932 03:35:52 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.865 00:07:38.865 real 0m2.017s 00:07:38.865 user 0m0.939s 00:07:38.865 sys 0m1.083s 00:07:38.865 03:35:53 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.865 03:35:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:07:38.865 ************************************ 00:07:38.865 END TEST allowed 00:07:38.865 ************************************ 00:07:38.865 03:35:53 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:38.865 ************************************ 00:07:38.865 END TEST acl 00:07:38.865 ************************************ 00:07:38.865 00:07:38.865 real 0m11.549s 00:07:38.865 user 0m2.960s 00:07:38.865 sys 0m3.648s 00:07:38.865 03:35:53 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.865 03:35:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:38.865 03:35:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:38.865 03:35:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:38.865 03:35:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.865 03:35:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.865 03:35:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:38.865 ************************************ 00:07:38.865 START TEST hugepages 00:07:38.865 ************************************ 00:07:38.865 03:35:53 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:38.865 * Looking for test storage... 
00:07:38.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5820256 kB' 'MemAvailable: 7405104 kB' 'Buffers: 2436 kB' 'Cached: 1798132 kB' 'SwapCached: 0 kB' 'Active: 444872 kB' 'Inactive: 1458056 kB' 'Active(anon): 112872 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 104000 kB' 'Mapped: 48548 kB' 'Shmem: 10512 kB' 'KReclaimable: 63460 kB' 'Slab: 136516 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6492 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 327244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.865 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.866 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
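The long run of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" entries here is setup/common.sh's get_meminfo walking a captured copy of /proc/meminfo field by field until it reaches Hugepagesize (the loop ends with "echo 2048" just below). A minimal sketch of the same lookup written directly against /proc/meminfo, assuming no per-node meminfo is requested; this is illustrative, not the setup/common.sh implementation:

# print the numeric value of one /proc/meminfo field, e.g. Hugepagesize -> 2048
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do          # split "Field:  value unit" into var/val
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo Hugepagesize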
00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:38.867 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:39.125 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:39.125 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:39.125 03:35:53 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:39.125 03:35:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.125 03:35:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.125 03:35:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:39.125 ************************************ 00:07:39.125 START TEST default_setup 00:07:39.125 ************************************ 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:07:39.125 03:35:53 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.974 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.236 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.236 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.236 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:40.236 
03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7905944 kB' 'MemAvailable: 9490500 kB' 'Buffers: 2436 kB' 'Cached: 1798116 kB' 'SwapCached: 0 kB' 'Active: 462192 kB' 'Inactive: 1458072 kB' 'Active(anon): 130192 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121300 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135752 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6480 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.236 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7905944 kB' 'MemAvailable: 9490500 kB' 'Buffers: 2436 kB' 'Cached: 1798116 kB' 'SwapCached: 0 kB' 'Active: 462044 kB' 'Inactive: 1458072 kB' 'Active(anon): 130044 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121148 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135752 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6496 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.237 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7905692 kB' 'MemAvailable: 9490248 kB' 'Buffers: 2436 kB' 'Cached: 1798116 kB' 'SwapCached: 0 kB' 'Active: 461676 kB' 'Inactive: 1458072 kB' 'Active(anon): 129676 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 120836 kB' 'Mapped: 48868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135752 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6496 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.238 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 
03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:40.239 nr_hugepages=1024 00:07:40.239 resv_hugepages=0 00:07:40.239 surplus_hugepages=0 00:07:40.239 anon_hugepages=0 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7905692 kB' 'MemAvailable: 9490248 kB' 'Buffers: 2436 kB' 'Cached: 1798116 kB' 'SwapCached: 0 kB' 'Active: 462152 kB' 'Inactive: 1458072 kB' 'Active(anon): 130152 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121308 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135752 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6512 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.239 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.240 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
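The xtrace block above is setup/common.sh's get_meminfo helper scanning /proc/meminfo, or the per-node /sys/devices/system/node/node0/meminfo when a node argument is given, one field at a time until it reaches the requested key. A condensed sketch of that lookup pattern follows; it is an illustrative simplification, where only the file paths, field names, and the skip/echo flow come from the trace, while the function name and exact structure here are assumed:

    #!/usr/bin/env bash
    shopt -s extglob                        # needed for the +([0-9]) pattern below

    # Sketch of the field lookup exercised in the trace above (assumed simplification).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}            # e.g. HugePages_Surp, optional NUMA node id
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs and carry a "Node N " prefix on each line.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # every non-matching field is skipped, as in the trace
            echo "$val"                         # value in kB, or a page count for HugePages_* fields
            return 0
        done
        return 1
    }

Against the node0 snapshot printed just below, get_meminfo_sketch HugePages_Surp 0 would echo 0, which is the value the test then feeds into its surplus-page accounting.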
00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7905692 kB' 'MemUsed: 4336280 kB' 'SwapCached: 0 kB' 'Active: 462056 kB' 'Inactive: 1458072 kB' 'Active(anon): 130056 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1800552 kB' 'Mapped: 48608 kB' 'AnonPages: 121196 kB' 'Shmem: 10472 kB' 'KernelStack: 6496 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135752 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.500 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:40.501 node0=1024 expecting 1024 00:07:40.501 ************************************ 00:07:40.501 END TEST default_setup 00:07:40.501 ************************************ 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:40.501 00:07:40.501 real 0m1.386s 00:07:40.501 user 0m0.611s 00:07:40.501 sys 0m0.726s 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.501 03:35:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:07:40.501 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:40.501 03:35:55 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:40.501 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.501 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.501 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:40.501 ************************************ 00:07:40.501 START TEST per_node_1G_alloc 00:07:40.501 ************************************ 00:07:40.501 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:07:40.501 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:07:40.501 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:07:40.501 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:40.501 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:40.502 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:40.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.024 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.024 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:07:41.024 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.024 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954540 kB' 'MemAvailable: 10539104 kB' 'Buffers: 2436 kB' 'Cached: 1798116 kB' 'SwapCached: 0 kB' 'Active: 462812 kB' 'Inactive: 1458080 kB' 'Active(anon): 130812 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121656 kB' 'Mapped: 48936 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135764 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6600 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.024 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 
03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.025 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954428 kB' 'MemAvailable: 10538992 kB' 'Buffers: 2436 kB' 'Cached: 1798116 kB' 'SwapCached: 0 kB' 'Active: 461868 kB' 'Inactive: 1458080 kB' 'Active(anon): 129868 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120972 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135772 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72928 kB' 'KernelStack: 6480 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.026 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@33 -- # return 0 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:41.027 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954428 kB' 'MemAvailable: 10538992 kB' 'Buffers: 2436 kB' 'Cached: 1798116 kB' 'SwapCached: 0 kB' 'Active: 461916 kB' 'Inactive: 1458080 kB' 'Active(anon): 129916 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120996 kB' 'Mapped: 48676 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135772 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72928 kB' 'KernelStack: 6464 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.028 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 
03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.029 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.030 nr_hugepages=512 00:07:41.030 resv_hugepages=0 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:41.030 surplus_hugepages=0 00:07:41.030 anon_hugepages=0 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954428 kB' 'MemAvailable: 10538996 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 462016 kB' 'Inactive: 1458084 kB' 'Active(anon): 130016 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121124 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135764 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6480 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 
03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.030 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 
03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.031 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:41.032 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954428 kB' 'MemUsed: 3287544 kB' 'SwapCached: 0 kB' 'Active: 461776 kB' 'Inactive: 1458084 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 1800556 kB' 'Mapped: 48608 kB' 'AnonPages: 121124 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135764 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.032 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:41.033 node0=512 expecting 512 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:41.033 00:07:41.033 real 0m0.704s 00:07:41.033 user 0m0.320s 00:07:41.033 sys 0m0.390s 00:07:41.033 ************************************ 00:07:41.033 END TEST per_node_1G_alloc 00:07:41.033 ************************************ 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.033 03:35:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:41.292 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:41.292 03:35:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:41.292 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.292 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.292 03:35:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:41.292 ************************************ 00:07:41.292 START TEST even_2G_alloc 00:07:41.293 ************************************ 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:41.293 03:35:55 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.293 03:35:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:41.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.552 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.552 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.552 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.552 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.815 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915868 kB' 'MemAvailable: 9500436 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 462500 kB' 'Inactive: 1458084 kB' 'Active(anon): 130500 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121600 kB' 'Mapped: 48800 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135808 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6472 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.815 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.816 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7916036 kB' 'MemAvailable: 9500604 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 461808 kB' 'Inactive: 1458084 kB' 'Active(anon): 129808 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121216 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135808 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6496 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.817 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.818 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7916648 kB' 'MemAvailable: 9501216 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 461864 kB' 'Inactive: 1458084 kB' 'Active(anon): 129864 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121224 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135808 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6496 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.819 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 
03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.820 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:41.821 nr_hugepages=1024 00:07:41.821 resv_hugepages=0 00:07:41.821 surplus_hugepages=0 00:07:41.821 anon_hugepages=0 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7916648 kB' 'MemAvailable: 9501216 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 461864 kB' 'Inactive: 1458084 kB' 'Active(anon): 129864 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121224 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135808 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6496 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.821 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.822 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
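The trace above keeps exercising the get_meminfo helper from setup/common.sh: it points mem_f at either /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node N " prefix, splits each line on ": ", and echoes the value once the requested key (HugePages_Total, HugePages_Surp, ...) matches; every other field hits the "continue" branch that dominates this log. A minimal standalone sketch of that lookup, written only as an illustration under the assumption of extglob being available and not as the verbatim SPDK helper, could look like:

    #!/usr/bin/env bash
    # Illustrative sketch of the meminfo lookup the xtrace above performs.
    # Simplified; not the verbatim setup/common.sh implementation.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local mem var val _
        # Per-node statistics live in sysfs; fall back to the global file otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines are prefixed with "Node N "; strip it so the keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested key (e.g. HugePages_Surp) appears.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # e.g. prints 0 for the node traced above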
00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7916648 kB' 'MemUsed: 4325324 kB' 'SwapCached: 0 kB' 'Active: 461772 kB' 'Inactive: 1458084 kB' 'Active(anon): 129772 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1800556 kB' 'Mapped: 48608 kB' 'AnonPages: 121132 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135808 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.823 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:41.824 node0=1024 expecting 1024 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:41.824 00:07:41.824 real 0m0.678s 00:07:41.824 user 0m0.318s 00:07:41.824 sys 0m0.368s 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.824 03:35:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:41.824 ************************************ 00:07:41.824 END TEST even_2G_alloc 00:07:41.824 ************************************ 00:07:41.824 03:35:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:41.824 03:35:56 setup.sh.hugepages -- 
setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:07:41.824 03:35:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.824 03:35:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.824 03:35:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:41.824 ************************************ 00:07:41.824 START TEST odd_alloc 00:07:41.824 ************************************ 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:41.824 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.825 03:35:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:42.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.394 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:42.394 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:42.394 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:42.394 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:07:42.394 03:35:57 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908388 kB' 'MemAvailable: 9492960 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 462512 kB' 'Inactive: 1458088 kB' 'Active(anon): 130512 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121576 kB' 'Mapped: 48492 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135808 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72964 kB' 'KernelStack: 6452 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.394 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.395 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemAvailable: 9493216 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 462108 kB' 'Inactive: 1458088 kB' 'Active(anon): 130108 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121212 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135828 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72984 kB' 'KernelStack: 6480 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459988 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.396 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
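The odd_alloc leg traced here requests an odd page count (nr_hugepages=1025, driven by HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes as shown earlier), re-runs scripts/setup.sh, and then verify_nr_hugepages walks the same meminfo fields again, this time matching AnonHugePages and HugePages_Surp. A rough manual equivalent of the state being checked, given only as an illustration (the real test goes through scripts/setup.sh rather than writing sysfs directly), would be:

    # Illustration only; requires root. The test itself exports HUGEMEM/HUGE_EVEN_ALLOC
    # and lets scripts/setup.sh do the allocation.
    echo 1025 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages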
00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:42.397 03:35:57 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.397 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemAvailable: 9493216 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 461832 kB' 'Inactive: 1458088 kB' 'Active(anon): 129832 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135828 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72984 kB' 'KernelStack: 6480 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
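[editor's note] The long run of "[[ Key == \H\u\g\e\P\a\g\e\s... ]] / continue" trace lines above is the per-key scan inside the get_meminfo helper from setup/common.sh. A minimal sketch of that logic, reconstructed from this xtrace for readability (the mapfile/read/continue steps follow the trace; the surrounding control flow is paraphrased, so treat it as an approximation rather than the verbatim upstream script):

    shopt -s extglob   # needed for the +([0-9]) pattern used to strip per-node prefixes

    # Sketch of get_meminfo as reconstructed from the trace above.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument whose per-NUMA meminfo exists, read that file instead
        # (the trace tests /sys/devices/system/node/node$node/meminfo exactly like this).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk "Key: value [kB]" pairs; skip every key until the requested one,
        # then print its value. This loop is the long [[ ... ]] / continue run above.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

In the trace this is invoked both without a node (e.g. get_meminfo HugePages_Surp, which falls back to /proc/meminfo because node is empty) and with one (get_meminfo HugePages_Surp 0 later in the run, which reads /sys/devices/system/node/node0/meminfo).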
00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.398 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.399 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 
03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:42.689 nr_hugepages=1025 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:07:42.689 resv_hugepages=0 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:42.689 surplus_hugepages=0 00:07:42.689 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:42.689 anon_hugepages=0 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemAvailable: 9493216 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 
462112 kB' 'Inactive: 1458088 kB' 'Active(anon): 130112 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121212 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135820 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72976 kB' 'KernelStack: 6480 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 
03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.690 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.691 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:42.692 
03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7910160 kB' 'MemUsed: 4331812 kB' 'SwapCached: 0 kB' 'Active: 462136 kB' 'Inactive: 1458088 kB' 'Active(anon): 130136 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1800560 kB' 'Mapped: 48608 kB' 'AnonPages: 121252 kB' 'Shmem: 10472 kB' 'KernelStack: 6496 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135816 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.692 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:42.693 node0=1025 expecting 1025 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:07:42.693 
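Nearly all of the log volume in this section comes from one helper: the get_meminfo function in setup/common.sh. It reads /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node argument is given, strips the leading "Node N " prefix from the per-node file, and then walks the fields one by one with IFS=': ' read -r var val _, skipping everything that is not the requested key; each skipped field shows up in the xtrace as one "[[ ... ]] / continue" pair. That is how the node-0 HugePages_Surp lookup above ends in "echo 0" just before the "node0=1025 expecting 1025" check. A simplified sketch of that pattern (not the verbatim script; the loop body is condensed):

    shopt -s extglob
    get_meminfo() {                        # sketch of the scanner traced above
        local get=$1 node=${2:-} var val _ line mem mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # these comparisons are the bulk of the log above
            echo "$val"                        # e.g. 0 for HugePages_Surp, 1025 for HugePages_Total
            return 0
        done
    }

In the trace it is invoked as, for example, get_meminfo HugePages_Surp 0, which is why every other meminfo field of node 0 is skipped before the final echo 0.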
00:07:42.693 real 0m0.672s 00:07:42.693 user 0m0.332s 00:07:42.693 sys 0m0.384s 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.693 03:35:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:42.693 ************************************ 00:07:42.693 END TEST odd_alloc 00:07:42.693 ************************************ 00:07:42.693 03:35:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:42.693 03:35:57 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:42.693 03:35:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.693 03:35:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.693 03:35:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:42.693 ************************************ 00:07:42.693 START TEST custom_alloc 00:07:42.693 ************************************ 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:42.694 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:42.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.216 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.216 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.216 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.216 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:43.216 03:35:57 
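The custom_alloc test starting above sizes its reservation directly from the requested amount: get_test_nr_hugepages is given 1048576 kB and this VM's default hugepage size is 2048 kB (see the Hugepagesize field in the meminfo dumps below), so 1048576 / 2048 = 512 pages are requested and pinned to node 0 via HUGENODE before scripts/setup.sh rebinds the devices. A minimal sketch of that sizing step, assuming the variable names from the trace (the Hugepagesize lookup is illustrative):

    # Sketch, not the verbatim setup/hugepages.sh: turn a size in kB into a page count.
    size=1048576                                                          # kB requested by custom_alloc
    default_hugepages=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this VM
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"                                                  # 512
    HUGENODE='nodes_hp[0]=512'                                            # all 512 pages on node 0, as echoed above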
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8960836 kB' 'MemAvailable: 10545408 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 462240 kB' 'Inactive: 1458088 kB' 'Active(anon): 130240 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121316 kB' 'Mapped: 48728 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135796 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72952 kB' 'KernelStack: 6424 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.216 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.216 
03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8960588 kB' 'MemAvailable: 10545160 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 462352 kB' 'Inactive: 1458088 kB' 'Active(anon): 130352 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121688 kB' 'Mapped: 48728 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135816 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6452 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13985300 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.217 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 
03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.218 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8960588 kB' 'MemAvailable: 10545160 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 461920 kB' 'Inactive: 1458088 kB' 'Active(anon): 129920 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121276 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135816 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6468 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.219 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.220 
03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.220 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:43.221 nr_hugepages=512 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:43.221 resv_hugepages=0 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:43.221 surplus_hugepages=0 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:43.221 anon_hugepages=0 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:43.221 03:35:58 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.221 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8960588 kB' 'MemAvailable: 10545160 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 461984 kB' 'Inactive: 1458088 kB' 'Active(anon): 129984 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121084 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135816 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6452 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.222 03:35:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.222 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 8960588 kB' 'MemUsed: 3281384 kB' 'SwapCached: 0 kB' 'Active: 462068 kB' 'Inactive: 1458088 kB' 'Active(anon): 130068 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1800560 kB' 'Mapped: 48596 kB' 'AnonPages: 121432 kB' 'Shmem: 10472 kB' 'KernelStack: 6468 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135816 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.223 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.224 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:43.225 node0=512 expecting 512 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:43.225 00:07:43.225 real 0m0.658s 00:07:43.225 user 0m0.328s 00:07:43.225 sys 0m0.375s 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.225 03:35:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:43.225 ************************************ 00:07:43.225 END TEST custom_alloc 00:07:43.225 ************************************ 00:07:43.225 03:35:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:43.225 03:35:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:43.225 03:35:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.225 03:35:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.225 03:35:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:43.225 ************************************ 00:07:43.225 START TEST no_shrink_alloc 00:07:43.225 ************************************ 00:07:43.483 03:35:58 
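The trace above closes out the custom_alloc case: get_meminfo finally reaches the HugePages_Surp line and echoes 0, the surplus is folded into nodes_test, the observed and expected per-node counts are collected into the sorted_t/sorted_s arrays, and the test passes on 'node0=512 expecting 512' before no_shrink_alloc starts. A minimal sketch of that final comparison, with the array contents hard-coded purely for illustration (the real values come from hugepages.sh state not shown in this excerpt):

    #!/usr/bin/env bash
    # Illustrative re-statement of the hugepages.sh@126-130 check traced above.
    # The array contents are assumed; in the real test they are filled from
    # get_meminfo and the requested per-node allocation.
    declare -A nodes_test=([0]=512)   # hugepages reported for node 0
    declare -A nodes_sys=([0]=512)    # hugepages the test expected on node 0
    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[$node]}]=1
        sorted_s[${nodes_sys[$node]}]=1
        echo "node${node}=${nodes_test[$node]} expecting ${nodes_sys[$node]}"
        [[ ${nodes_test[$node]} == "${nodes_sys[$node]}" ]] || exit 1
    done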
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:43.483 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:43.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.743 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.743 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.743 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.743 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:43.743 03:35:58 
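The no_shrink_alloc prologue traced above asks get_test_nr_hugepages for 2097152 kB pinned to node 0; with the 2048 kB hugepage size reported by this VM that is 2097152 / 2048 = 1024 pages, which is why nr_hugepages and nodes_test[0] both end up as 1024 before scripts/setup.sh is run and verify_nr_hugepages begins. A rough sketch of that sizing step, using illustrative variable names rather than the exact hugepages.sh internals:

    #!/usr/bin/env bash
    # Sizing arithmetic behind "get_test_nr_hugepages 2097152 0", as traced above.
    size_kb=2097152            # requested hugepage memory in kB
    default_hugepage_kb=2048   # Hugepagesize reported by this VM
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1024
    declare -A nodes_test
    nodes_test[0]=$nr_hugepages   # the single requested node id was '0'
    echo "nr_hugepages=$nr_hugepages, nodes_test[0]=${nodes_test[0]}"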
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915524 kB' 'MemAvailable: 9500092 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 462364 kB' 'Inactive: 1458084 kB' 'Active(anon): 130364 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121448 kB' 'Mapped: 48792 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135784 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72940 kB' 'KernelStack: 6520 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.743 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
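The snapshot printed by get_meminfo just above already contains the numbers the rest of this stretch is verifying: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and 1024 * 2048 kB = 2097152 kB matches both the Hugetlb line and the size requested earlier; the field-by-field scan that continues below is only extracting AnonHugePages (0). A quick self-contained check of that relationship, with the values copied from the snapshot:

    #!/usr/bin/env bash
    # Consistency check on the /proc/meminfo snapshot above.
    hugepages_total=1024    # HugePages_Total
    hugepagesize_kb=2048    # Hugepagesize (kB)
    hugetlb_kb=2097152      # Hugetlb (kB), also the size the test requested
    if (( hugepages_total * hugepagesize_kb == hugetlb_kb )); then
        echo "pool size checks out: ${hugepages_total} x ${hugepagesize_kb} kB = ${hugetlb_kb} kB"
    fi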
00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.744 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:43.745 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- 
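Every long run of continue entries in this log is one call to common.sh's get_meminfo: the meminfo file is slurped into an array, any leading 'Node N ' prefix is stripped, and each line is split with IFS=': ' until the requested field is found and its value echoed, which is how anon above came back as 0 and how the HugePages_Surp scan starting below comes back as 0 as well. A condensed, hedged re-statement of that pattern (simplified from the traced common.sh, not a verbatim copy):

    #!/usr/bin/env bash
    # Simplified sketch of the get_meminfo loop traced throughout this section.
    shopt -s extglob   # needed for the "Node N " prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo, as in common.sh@23.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop "Node N " prefixes
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo AnonHugePages   # prints 0 on this test VM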
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915416 kB' 'MemAvailable: 9499984 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 461924 kB' 'Inactive: 1458084 kB' 'Active(anon): 129924 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121028 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135772 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72928 kB' 'KernelStack: 6496 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.007 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 
03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.008 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.009 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915844 kB' 'MemAvailable: 9500412 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 461908 kB' 'Inactive: 1458084 kB' 'Active(anon): 129908 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121008 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135768 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72924 kB' 'KernelStack: 6496 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 
03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.010 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:44.011 nr_hugepages=1024 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:44.011 resv_hugepages=0 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:44.011 surplus_hugepages=0 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:44.011 anon_hugepages=0 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:44.011 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915844 kB' 'MemAvailable: 9500412 kB' 'Buffers: 2436 kB' 'Cached: 1798120 kB' 'SwapCached: 0 kB' 'Active: 462148 kB' 'Inactive: 1458084 kB' 'Active(anon): 130148 kB' 
'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121252 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62844 kB' 'Slab: 135768 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72924 kB' 'KernelStack: 6496 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.012 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.013 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7915844 kB' 'MemUsed: 4326128 kB' 'SwapCached: 0 kB' 'Active: 461884 kB' 'Inactive: 1458084 kB' 'Active(anon): 129884 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1800556 kB' 'Mapped: 48612 kB' 'AnonPages: 121028 kB' 'Shmem: 10472 kB' 'KernelStack: 6496 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62844 kB' 'Slab: 135764 kB' 'SReclaimable: 62844 kB' 'SUnreclaim: 72920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.014 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.014 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 
03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:44.015 node0=1024 expecting 1024 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:44.015 03:35:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:44.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:44.535 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:44.535 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:44.535 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:44.535 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:44.535 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7917628 kB' 'MemAvailable: 9502176 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 460264 kB' 'Inactive: 1458088 kB' 'Active(anon): 128264 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 119364 kB' 'Mapped: 48116 kB' 'Shmem: 10472 kB' 'KReclaimable: 62800 kB' 'Slab: 135620 kB' 'SReclaimable: 62800 kB' 'SUnreclaim: 72820 kB' 'KernelStack: 6452 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.535 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.536 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.537 03:35:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7917628 kB' 'MemAvailable: 9502176 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 459640 kB' 'Inactive: 1458088 kB' 'Active(anon): 127640 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118704 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62800 kB' 'Slab: 135564 kB' 'SReclaimable: 62800 kB' 'SUnreclaim: 72764 kB' 'KernelStack: 6416 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 
03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.537 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.538 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7917628 kB' 'MemAvailable: 9502176 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 459328 kB' 'Inactive: 1458088 kB' 'Active(anon): 127328 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118452 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62800 kB' 'Slab: 135564 kB' 'SReclaimable: 62800 kB' 'SUnreclaim: 72764 kB' 'KernelStack: 6432 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.539 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.540 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:44.541 nr_hugepages=1024 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:44.541 resv_hugepages=0 00:07:44.541 surplus_hugepages=0 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:44.541 anon_hugepages=0 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7917628 kB' 'MemAvailable: 9502176 kB' 'Buffers: 2436 kB' 'Cached: 1798124 kB' 'SwapCached: 0 kB' 'Active: 459348 kB' 'Inactive: 1458088 kB' 'Active(anon): 127348 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118720 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62800 kB' 'Slab: 135564 kB' 'SReclaimable: 62800 kB' 'SUnreclaim: 72764 kB' 'KernelStack: 6432 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.541 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 
03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
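The long run of `continue` traces above and below is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the one it was asked for (HugePages_Rsvd earlier, HugePages_Total here, then a per-node HugePages_Surp query). A minimal stand-alone sketch of that field-scan technique, assuming a stock meminfo layout; `get_meminfo_sketch` is a hypothetical name, not the SPDK helper itself:

```bash
#!/usr/bin/env bash
# Hypothetical stand-alone sketch of the scan traced here (not setup/common.sh):
# pull one key's value out of /proc/meminfo, or out of a per-node meminfo file.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; strip that first,
    # then read field by field ("var: val ...") until the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Total      # 1024 on the VM traced in this log
get_meminfo_sketch HugePages_Surp 0     # per-node query against node0/meminfo
```

On the box traced here this would print 1024 for HugePages_Total and 0 for the node-0 HugePages_Surp query, matching the `echo 1024` / `echo 0` returns visible in the trace.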
00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.542 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7919348 kB' 'MemUsed: 4322624 kB' 'SwapCached: 0 kB' 'Active: 459308 kB' 'Inactive: 1458088 kB' 'Active(anon): 127308 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1458088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1800560 kB' 'Mapped: 47872 kB' 'AnonPages: 118452 kB' 'Shmem: 10472 kB' 'KernelStack: 
6432 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62800 kB' 'Slab: 135560 kB' 'SReclaimable: 62800 kB' 'SUnreclaim: 72760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.543 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
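Once the per-node HugePages_Surp value comes back (0 here), hugepages.sh folds it into its per-node expectations and prints the `node0=1024 expecting 1024` line seen just below. A rough sketch of that bookkeeping, with illustrative names and standard sysfs paths rather than the real get_nodes/nodes_test code:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the per-node accounting traced around here; the real
# logic lives in setup/hugepages.sh (get_nodes plus the expectation check).
shopt -s extglob
nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_test[${node##*node}]=1024          # the full 1024-page pool is expected on each node
done
echo "no_nodes=${#nodes_test[@]}"            # 1 on the single-node VM in this log

for node in "${!nodes_test[@]}"; do
    # "Node <N> HugePages_Surp: <pages>" from the per-node meminfo
    surp=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Surp:" { print $4 }' \
           "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))           # surplus pages count toward the node's total
    echo "node${node}=${nodes_test[node]} expecting 1024"
done
```

With zero surplus and zero reserved pages, the per-node total stays at 1024, which is why the trace ends the test with `[[ 1024 == \1\0\2\4 ]]` succeeding.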
00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:44.544 node0=1024 expecting 1024 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:44.544 00:07:44.544 real 0m1.272s 00:07:44.544 user 0m0.649s 00:07:44.544 sys 0m0.708s 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.544 ************************************ 00:07:44.544 END TEST no_shrink_alloc 00:07:44.544 ************************************ 00:07:44.544 03:35:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:44.544 03:35:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:44.545 03:35:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:44.545 00:07:44.545 real 0m5.767s 00:07:44.545 user 0m2.691s 00:07:44.545 sys 0m3.196s 00:07:44.545 03:35:59 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.545 03:35:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:44.545 ************************************ 00:07:44.545 END TEST hugepages 00:07:44.545 ************************************ 00:07:44.803 03:35:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:44.803 03:35:59 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:44.803 03:35:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.803 03:35:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.803 03:35:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:44.803 ************************************ 00:07:44.803 START TEST driver 00:07:44.803 ************************************ 00:07:44.803 03:35:59 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:44.803 * Looking for test storage... 00:07:44.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:44.803 03:35:59 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:44.803 03:35:59 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:44.803 03:35:59 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:51.368 03:36:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:51.368 03:36:05 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.368 03:36:05 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.368 03:36:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:51.368 ************************************ 00:07:51.368 START TEST guess_driver 00:07:51.368 ************************************ 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:07:51.368 
03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:07:51.368 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:51.368 Looking for driver=uio_pci_generic 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:07:51.368 03:36:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.627 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.627 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:51.627 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:51.886 03:36:06 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:07:51.886 03:36:06 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:58.445 00:07:58.445 real 0m7.111s 00:07:58.445 user 0m0.790s 00:07:58.445 sys 0m1.391s 00:07:58.445 03:36:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.445 03:36:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:58.445 ************************************ 00:07:58.445 END TEST guess_driver 00:07:58.445 ************************************ 00:07:58.445 03:36:12 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:07:58.445 00:07:58.445 real 0m13.133s 00:07:58.445 user 0m1.125s 00:07:58.445 sys 0m2.175s 00:07:58.445 03:36:12 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.445 03:36:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:58.445 ************************************ 00:07:58.445 END TEST driver 00:07:58.445 ************************************ 00:07:58.445 03:36:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:58.445 03:36:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:58.445 03:36:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.445 03:36:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.445 03:36:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:58.445 ************************************ 00:07:58.445 START TEST devices 00:07:58.445 ************************************ 00:07:58.445 03:36:12 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:58.445 * Looking for test storage... 
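A note on the guess_driver run that just wrapped up: pick_driver tries vfio first by counting the entries under /sys/kernel/iommu_groups and by reading /sys/module/vfio/parameters/enable_unsafe_noiommu_mode; both checks came back empty on this VM (the "(( 0 > 0 ))" and "[[ '' == Y ]]" lines above), so it fell back to uio and used modprobe --show-depends uio_pci_generic to confirm that the module chain resolves to real .ko files before settling on driver=uio_pci_generic. Roughly, as a stand-alone sketch rather than the actual setup/driver.sh functions:

  # Sketch of the vfio-vs-uio decision the trace above walks through.
  # pick_pci_driver is a hypothetical name; the vfio branch was not taken in this run.
  pick_pci_driver() {
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*) unsafe=
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci                # IOMMU groups exist, or no-IOMMU mode is forced on
      elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic         # module and its dependencies resolve to .ko files
      else
          echo 'No valid driver found'
      fi
  }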
00:07:58.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:58.445 03:36:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:58.445 03:36:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:58.445 03:36:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:58.445 03:36:12 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:59.012 03:36:13 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:59.012 03:36:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:59.012 03:36:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:59.012 03:36:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:07:59.012 No valid GPT data, bailing 00:07:59.012 03:36:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:59.012 03:36:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:59.012 03:36:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:59.012 03:36:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:59.012 03:36:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:59.012 03:36:13 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:07:59.012 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:59.013 
03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:07:59.013 03:36:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:07:59.013 03:36:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:07:59.013 No valid GPT data, bailing 00:07:59.013 03:36:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:59.013 03:36:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:59.013 03:36:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:07:59.013 03:36:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:07:59.013 03:36:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:07:59.013 03:36:13 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:59.013 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:07:59.013 03:36:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:07:59.013 03:36:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:07:59.272 No valid GPT data, bailing 00:07:59.272 03:36:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:59.272 03:36:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:59.272 03:36:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:07:59.272 03:36:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:07:59.272 03:36:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:07:59.272 03:36:13 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:59.272 03:36:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:07:59.272 03:36:13 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:07:59.272 03:36:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:07:59.272 No valid GPT data, bailing 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:07:59.272 03:36:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:07:59.272 03:36:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:07:59.272 03:36:14 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:07:59.272 No valid GPT data, bailing 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:07:59.272 03:36:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:07:59.272 03:36:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:07:59.272 03:36:14 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:07:59.272 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:07:59.272 03:36:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:07:59.273 03:36:14 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:07:59.531 No valid GPT data, bailing 00:07:59.531 03:36:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:59.531 03:36:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:59.531 03:36:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:59.531 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:07:59.531 03:36:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:07:59.531 03:36:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:07:59.531 03:36:14 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:07:59.531 03:36:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:07:59.531 03:36:14 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:07:59.531 03:36:14 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:59.531 03:36:14 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:59.531 03:36:14 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.531 03:36:14 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.531 03:36:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:59.531 ************************************ 00:07:59.531 START TEST nvme_mount 00:07:59.531 ************************************ 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:59.531 03:36:14 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:00.467 Creating new GPT entries in memory. 00:08:00.467 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:00.467 other utilities. 00:08:00.467 03:36:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:00.467 03:36:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:00.467 03:36:15 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:00.467 03:36:15 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:00.467 03:36:15 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:01.429 Creating new GPT entries in memory. 00:08:01.429 The operation has completed successfully. 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59933 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:01.429 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:01.702 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.960 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:01.960 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.960 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:01.960 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.960 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:01.960 03:36:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.218 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:02.218 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:02.477 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:02.477 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:02.736 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:02.736 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
00:08:02.736 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:02.736 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:02.736 03:36:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:02.995 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:02.995 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:02.995 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:02.995 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.995 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:02.995 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.995 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:02.995 
03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:03.253 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:03.253 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:03.253 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:03.253 03:36:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:03.511 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:03.511 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:03.511 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:03.511 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:03.511 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:03.769 03:36:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:04.027 03:36:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:04.027 03:36:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.286 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:04.286 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:04.544 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:04.544 00:08:04.544 real 0m5.173s 00:08:04.544 user 0m1.428s 00:08:04.544 sys 0m1.444s 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.544 03:36:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:08:04.544 ************************************ 00:08:04.544 END TEST nvme_mount 00:08:04.544 ************************************ 00:08:04.803 03:36:19 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:04.803 03:36:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:04.803 03:36:19 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.803 03:36:19 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.803 03:36:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:04.803 ************************************ 00:08:04.803 START TEST dm_mount 00:08:04.803 ************************************ 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:08:04.803 
03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:04.803 03:36:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:05.739 Creating new GPT entries in memory. 00:08:05.739 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:05.739 other utilities. 00:08:05.739 03:36:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:05.739 03:36:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:05.739 03:36:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:05.739 03:36:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:05.739 03:36:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:06.673 Creating new GPT entries in memory. 00:08:06.673 The operation has completed successfully. 00:08:06.673 03:36:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:06.673 03:36:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:06.673 03:36:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:06.673 03:36:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:06.673 03:36:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:08.049 The operation has completed successfully. 
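A note on the partitioning sequence above (the nvme_mount test earlier followed the same pattern): the disk is first wiped with sgdisk --zap-all, then each partition is created under flock /dev/nvme0n1 so access to the disk is serialized, while sync_dev_uevents.sh waits for the matching block/partition uevents so the test only continues once the new device nodes actually exist. Condensed into a stand-alone sketch — wait_for_part is a hypothetical stand-in for that uevent listener:

  # Sketch: recreate the two small partitions from the trace and wait for their nodes.
  disk=/dev/nvme0n1                    # assumption: the same test disk as in the log
  wait_for_part() {                    # hypothetical helper; the real test listens for uevents
      local dev=$1 i
      for ((i = 0; i < 50; i++)); do [[ -b $dev ]] && return 0; sleep 0.1; done
      return 1
  }
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:264191     # same sector range as in the trace
  flock "$disk" sgdisk "$disk" --new=2:264192:526335   # second partition right after the first
  wait_for_part "${disk}p1" && wait_for_part "${disk}p2"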
00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60562 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:08.049 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:08.050 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.050 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:08.050 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.308 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:08.308 03:36:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.308 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:08.308 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.308 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:08.308 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.568 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:08.568 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:08.826 03:36:23 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:09.085 03:36:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.344 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:09.344 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.344 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:09.344 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.603 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:09.603 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
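A note on the verify step just above and the cleanup whose wipefs output follows: the test confirms the device-mapper target really sits on top of both partitions by looking for dm-0 under each partition's holders directory in sysfs, then tears everything down in reverse order — unmount, dmsetup remove --force nvme_dm_test, and finally wipefs on the partition signatures. A small sketch of walking that holders relationship, using a hypothetical helper name rather than anything from setup/devices.sh:

  # Sketch: list which dm devices currently hold a given partition, as checked in the trace.
  show_holders() {
      local part=$1 h
      for h in /sys/class/block/"$part"/holders/*; do
          [[ -e $h ]] || continue                              # partition has no holders
          echo "$part is held by ${h##*/} ($(cat "$h/dm/name" 2>/dev/null))"
      done
  }
  show_holders nvme0n1p1   # expected while the dm target exists: dm-0 (nvme_dm_test)
  show_holders nvme0n1p2   # same holder for the second partition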
00:08:09.861 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:09.861 00:08:09.861 real 0m5.134s 00:08:09.861 user 0m0.941s 00:08:09.861 sys 0m1.062s 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.861 ************************************ 00:08:09.861 END TEST dm_mount 00:08:09.861 ************************************ 00:08:09.861 03:36:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:08:09.861 03:36:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:09.861 03:36:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:08:09.861 03:36:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:08:09.861 03:36:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:09.861 03:36:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:09.861 03:36:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:09.861 03:36:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:09.861 03:36:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:10.120 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:10.120 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:10.120 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:10.120 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:10.120 03:36:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:08:10.120 03:36:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:10.120 03:36:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:10.120 03:36:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:10.120 03:36:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:10.120 03:36:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:10.120 03:36:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:10.120 00:08:10.120 real 0m12.286s 00:08:10.120 user 0m3.308s 00:08:10.120 sys 0m3.258s 00:08:10.120 03:36:24 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.120 03:36:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:10.120 ************************************ 00:08:10.120 END TEST devices 00:08:10.120 ************************************ 00:08:10.120 03:36:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:08:10.120 00:08:10.120 real 0m43.002s 00:08:10.120 user 0m10.183s 00:08:10.120 sys 0m12.437s 00:08:10.120 03:36:24 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.120 ************************************ 00:08:10.120 03:36:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:10.120 END TEST setup.sh 00:08:10.120 ************************************ 00:08:10.120 03:36:25 -- common/autotest_common.sh@1142 -- # return 0 00:08:10.120 03:36:25 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:10.687 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.254 Hugepages 00:08:11.254 node hugesize free / total 00:08:11.254 node0 1048576kB 0 / 0 00:08:11.254 node0 2048kB 2048 / 2048 00:08:11.254 00:08:11.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:11.254 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:11.254 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:11.254 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:11.517 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:08:11.517 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:11.517 03:36:26 -- spdk/autotest.sh@130 -- # uname -s 00:08:11.517 03:36:26 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:08:11.517 03:36:26 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:08:11.517 03:36:26 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:12.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.649 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.649 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.649 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.649 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.906 03:36:27 -- common/autotest_common.sh@1532 -- # sleep 1 00:08:13.842 03:36:28 -- common/autotest_common.sh@1533 -- # bdfs=() 00:08:13.842 03:36:28 -- common/autotest_common.sh@1533 -- # local bdfs 00:08:13.842 03:36:28 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:08:13.842 03:36:28 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:08:13.842 03:36:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:13.842 03:36:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:13.842 03:36:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:13.842 03:36:28 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:13.842 03:36:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:13.842 03:36:28 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:08:13.842 03:36:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:13.842 03:36:28 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:14.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:14.358 Waiting for block devices as requested 00:08:14.358 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.358 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.672 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.672 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:19.943 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:19.943 03:36:34 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:19.943 03:36:34 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:19.943 03:36:34 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:19.943 03:36:34 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:08:19.943 03:36:34 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:19.943 03:36:34 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:19.943 03:36:34 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:19.943 03:36:34 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:08:19.943 03:36:34 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:08:19.943 03:36:34 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:08:19.943 03:36:34 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:08:19.943 03:36:34 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:19.943 03:36:34 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:19.943 03:36:34 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:08:19.943 03:36:34 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:19.943 03:36:34 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:19.943 03:36:34 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:08:19.943 03:36:34 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:19.943 03:36:34 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:19.943 03:36:34 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:19.943 03:36:34 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:19.943 03:36:34 -- common/autotest_common.sh@1557 -- # continue 00:08:19.943 03:36:34 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:19.943 03:36:34 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:08:19.944 03:36:34 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:19.944 03:36:34 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:19.944 03:36:34 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1557 -- # continue 00:08:19.944 03:36:34 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:19.944 03:36:34 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:08:19.944 03:36:34 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:19.944 03:36:34 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:19.944 03:36:34 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1557 -- # continue 00:08:19.944 03:36:34 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:19.944 03:36:34 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:08:19.944 03:36:34 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:19.944 03:36:34 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:08:19.944 03:36:34 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:19.944 03:36:34 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:19.944 03:36:34 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:19.944 03:36:34 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:19.944 03:36:34 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:19.944 03:36:34 -- common/autotest_common.sh@1557 -- # continue 00:08:19.944 03:36:34 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:19.944 03:36:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:19.944 03:36:34 -- common/autotest_common.sh@10 -- # set +x 00:08:19.944 03:36:34 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:19.944 03:36:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.944 03:36:34 -- common/autotest_common.sh@10 -- # set +x 00:08:19.944 03:36:34 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:20.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:20.768 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.768 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.768 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.026 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.026 03:36:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:08:21.026 03:36:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.026 03:36:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.026 03:36:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:08:21.026 03:36:35 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:08:21.026 03:36:35 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:08:21.026 03:36:35 -- common/autotest_common.sh@1577 -- # bdfs=() 00:08:21.026 03:36:35 -- common/autotest_common.sh@1577 -- # local bdfs 00:08:21.026 03:36:35 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:08:21.026 03:36:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:21.026 03:36:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:21.026 03:36:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.026 03:36:35 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:21.026 03:36:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:21.026 03:36:35 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:08:21.026 03:36:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:21.026 03:36:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # device=0x0010 00:08:21.026 03:36:35 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:21.026 03:36:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # device=0x0010 00:08:21.026 03:36:35 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:21.026 03:36:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # device=0x0010 00:08:21.026 03:36:35 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:21.026 03:36:35 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:08:21.026 03:36:35 -- common/autotest_common.sh@1580 -- # device=0x0010 00:08:21.026 03:36:35 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:21.026 03:36:35 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:08:21.026 03:36:35 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:08:21.026 03:36:35 -- common/autotest_common.sh@1593 -- # return 0 00:08:21.026 03:36:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:08:21.026 03:36:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:21.026 03:36:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:21.026 03:36:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:21.026 03:36:35 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:21.026 03:36:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.027 03:36:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.027 03:36:35 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:08:21.027 03:36:35 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:21.027 03:36:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.027 03:36:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.027 03:36:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.027 ************************************ 00:08:21.027 START TEST env 00:08:21.027 ************************************ 00:08:21.027 03:36:35 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:21.286 * Looking for test storage... 00:08:21.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:21.286 03:36:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:21.286 03:36:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.286 03:36:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.286 03:36:35 env -- common/autotest_common.sh@10 -- # set +x 00:08:21.286 ************************************ 00:08:21.286 START TEST env_memory 00:08:21.286 ************************************ 00:08:21.286 03:36:35 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:21.286 00:08:21.286 00:08:21.286 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.286 http://cunit.sourceforge.net/ 00:08:21.286 00:08:21.286 00:08:21.286 Suite: memory 00:08:21.286 Test: alloc and free memory map ...[2024-07-26 03:36:36.079001] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:21.286 passed 00:08:21.286 Test: mem map translation ...[2024-07-26 03:36:36.134567] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:21.286 [2024-07-26 03:36:36.134680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:21.286 [2024-07-26 03:36:36.134768] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:21.286 [2024-07-26 03:36:36.134797] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:21.567 passed 00:08:21.567 Test: mem map registration ...[2024-07-26 03:36:36.215068] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:21.567 [2024-07-26 03:36:36.215165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:21.567 passed 00:08:21.567 Test: mem map adjacent registrations ...passed 00:08:21.567 00:08:21.567 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.567 suites 1 1 n/a 0 0 00:08:21.567 tests 4 4 4 0 0 00:08:21.567 asserts 152 152 152 0 n/a 00:08:21.567 00:08:21.567 Elapsed time = 0.304 seconds 00:08:21.567 00:08:21.567 real 0m0.345s 00:08:21.567 user 0m0.317s 00:08:21.567 sys 0m0.021s 00:08:21.567 03:36:36 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.567 03:36:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:21.567 ************************************ 00:08:21.567 END TEST env_memory 00:08:21.567 ************************************ 00:08:21.567 03:36:36 env -- common/autotest_common.sh@1142 -- # return 0 00:08:21.567 03:36:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:21.567 03:36:36 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.567 03:36:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.568 03:36:36 env -- common/autotest_common.sh@10 -- # set +x 00:08:21.568 ************************************ 00:08:21.568 START TEST env_vtophys 00:08:21.568 ************************************ 00:08:21.568 03:36:36 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:21.568 EAL: lib.eal log level changed from notice to debug 00:08:21.568 EAL: Detected lcore 0 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 1 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 2 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 3 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 4 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 5 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 6 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 7 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 8 as core 0 on socket 0 00:08:21.568 EAL: Detected lcore 9 as core 0 on socket 0 00:08:21.568 EAL: Maximum logical cores by configuration: 128 00:08:21.568 EAL: Detected CPU lcores: 10 00:08:21.568 EAL: Detected NUMA nodes: 1 00:08:21.568 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:21.568 EAL: Detected shared linkage of DPDK 00:08:21.568 EAL: No shared files mode enabled, IPC will be disabled 00:08:21.827 EAL: Selected IOVA mode 'PA' 00:08:21.827 EAL: Probing VFIO support... 00:08:21.827 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:21.827 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:21.827 EAL: Ask a virtual area of 0x2e000 bytes 00:08:21.827 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:21.827 EAL: Setting up physically contiguous memory... 
00:08:21.827 EAL: Setting maximum number of open files to 524288 00:08:21.827 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:21.827 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:21.827 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.827 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:21.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.827 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.827 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:21.827 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:21.827 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.827 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:21.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.827 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.827 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:21.827 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:21.827 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.827 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:21.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.827 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.827 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:21.827 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:21.827 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.827 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:21.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.827 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.827 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:21.827 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:21.828 EAL: Hugepages will be freed exactly as allocated. 00:08:21.828 EAL: No shared files mode enabled, IPC is disabled 00:08:21.828 EAL: No shared files mode enabled, IPC is disabled 00:08:21.828 EAL: TSC frequency is ~2200000 KHz 00:08:21.828 EAL: Main lcore 0 is ready (tid=7fdb3d81ba40;cpuset=[0]) 00:08:21.828 EAL: Trying to obtain current memory policy. 00:08:21.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.828 EAL: Restoring previous memory policy: 0 00:08:21.828 EAL: request: mp_malloc_sync 00:08:21.828 EAL: No shared files mode enabled, IPC is disabled 00:08:21.828 EAL: Heap on socket 0 was expanded by 2MB 00:08:21.828 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:21.828 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:21.828 EAL: Mem event callback 'spdk:(nil)' registered 00:08:21.828 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:21.828 00:08:21.828 00:08:21.828 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.828 http://cunit.sourceforge.net/ 00:08:21.828 00:08:21.828 00:08:21.828 Suite: components_suite 00:08:22.394 Test: vtophys_malloc_test ...passed 00:08:22.394 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:08:22.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.394 EAL: Restoring previous memory policy: 4 00:08:22.394 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.394 EAL: request: mp_malloc_sync 00:08:22.394 EAL: No shared files mode enabled, IPC is disabled 00:08:22.394 EAL: Heap on socket 0 was expanded by 4MB 00:08:22.394 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.394 EAL: request: mp_malloc_sync 00:08:22.394 EAL: No shared files mode enabled, IPC is disabled 00:08:22.394 EAL: Heap on socket 0 was shrunk by 4MB 00:08:22.394 EAL: Trying to obtain current memory policy. 00:08:22.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.394 EAL: Restoring previous memory policy: 4 00:08:22.394 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.394 EAL: request: mp_malloc_sync 00:08:22.394 EAL: No shared files mode enabled, IPC is disabled 00:08:22.394 EAL: Heap on socket 0 was expanded by 6MB 00:08:22.394 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.394 EAL: request: mp_malloc_sync 00:08:22.394 EAL: No shared files mode enabled, IPC is disabled 00:08:22.394 EAL: Heap on socket 0 was shrunk by 6MB 00:08:22.395 EAL: Trying to obtain current memory policy. 00:08:22.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.395 EAL: Restoring previous memory policy: 4 00:08:22.395 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.395 EAL: request: mp_malloc_sync 00:08:22.395 EAL: No shared files mode enabled, IPC is disabled 00:08:22.395 EAL: Heap on socket 0 was expanded by 10MB 00:08:22.395 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.395 EAL: request: mp_malloc_sync 00:08:22.395 EAL: No shared files mode enabled, IPC is disabled 00:08:22.395 EAL: Heap on socket 0 was shrunk by 10MB 00:08:22.395 EAL: Trying to obtain current memory policy. 00:08:22.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.395 EAL: Restoring previous memory policy: 4 00:08:22.395 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.395 EAL: request: mp_malloc_sync 00:08:22.395 EAL: No shared files mode enabled, IPC is disabled 00:08:22.395 EAL: Heap on socket 0 was expanded by 18MB 00:08:22.395 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.395 EAL: request: mp_malloc_sync 00:08:22.395 EAL: No shared files mode enabled, IPC is disabled 00:08:22.395 EAL: Heap on socket 0 was shrunk by 18MB 00:08:22.395 EAL: Trying to obtain current memory policy. 00:08:22.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.395 EAL: Restoring previous memory policy: 4 00:08:22.395 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.395 EAL: request: mp_malloc_sync 00:08:22.395 EAL: No shared files mode enabled, IPC is disabled 00:08:22.395 EAL: Heap on socket 0 was expanded by 34MB 00:08:22.395 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.395 EAL: request: mp_malloc_sync 00:08:22.395 EAL: No shared files mode enabled, IPC is disabled 00:08:22.395 EAL: Heap on socket 0 was shrunk by 34MB 00:08:22.395 EAL: Trying to obtain current memory policy. 
00:08:22.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.395 EAL: Restoring previous memory policy: 4 00:08:22.395 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.395 EAL: request: mp_malloc_sync 00:08:22.395 EAL: No shared files mode enabled, IPC is disabled 00:08:22.395 EAL: Heap on socket 0 was expanded by 66MB 00:08:22.653 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.653 EAL: request: mp_malloc_sync 00:08:22.653 EAL: No shared files mode enabled, IPC is disabled 00:08:22.653 EAL: Heap on socket 0 was shrunk by 66MB 00:08:22.653 EAL: Trying to obtain current memory policy. 00:08:22.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.653 EAL: Restoring previous memory policy: 4 00:08:22.653 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.653 EAL: request: mp_malloc_sync 00:08:22.653 EAL: No shared files mode enabled, IPC is disabled 00:08:22.653 EAL: Heap on socket 0 was expanded by 130MB 00:08:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.911 EAL: request: mp_malloc_sync 00:08:22.911 EAL: No shared files mode enabled, IPC is disabled 00:08:22.911 EAL: Heap on socket 0 was shrunk by 130MB 00:08:23.170 EAL: Trying to obtain current memory policy. 00:08:23.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.170 EAL: Restoring previous memory policy: 4 00:08:23.170 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.170 EAL: request: mp_malloc_sync 00:08:23.170 EAL: No shared files mode enabled, IPC is disabled 00:08:23.170 EAL: Heap on socket 0 was expanded by 258MB 00:08:23.428 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.687 EAL: request: mp_malloc_sync 00:08:23.687 EAL: No shared files mode enabled, IPC is disabled 00:08:23.687 EAL: Heap on socket 0 was shrunk by 258MB 00:08:23.946 EAL: Trying to obtain current memory policy. 00:08:23.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.946 EAL: Restoring previous memory policy: 4 00:08:23.946 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.946 EAL: request: mp_malloc_sync 00:08:23.946 EAL: No shared files mode enabled, IPC is disabled 00:08:23.946 EAL: Heap on socket 0 was expanded by 514MB 00:08:24.881 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.881 EAL: request: mp_malloc_sync 00:08:24.881 EAL: No shared files mode enabled, IPC is disabled 00:08:24.881 EAL: Heap on socket 0 was shrunk by 514MB 00:08:25.816 EAL: Trying to obtain current memory policy. 
00:08:25.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.816 EAL: Restoring previous memory policy: 4 00:08:25.816 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.816 EAL: request: mp_malloc_sync 00:08:25.816 EAL: No shared files mode enabled, IPC is disabled 00:08:25.816 EAL: Heap on socket 0 was expanded by 1026MB 00:08:27.716 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.716 EAL: request: mp_malloc_sync 00:08:27.716 EAL: No shared files mode enabled, IPC is disabled 00:08:27.716 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:29.088 passed 00:08:29.088 00:08:29.088 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.088 suites 1 1 n/a 0 0 00:08:29.088 tests 2 2 2 0 0 00:08:29.088 asserts 5425 5425 5425 0 n/a 00:08:29.088 00:08:29.088 Elapsed time = 7.000 seconds 00:08:29.088 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.088 EAL: request: mp_malloc_sync 00:08:29.088 EAL: No shared files mode enabled, IPC is disabled 00:08:29.088 EAL: Heap on socket 0 was shrunk by 2MB 00:08:29.088 EAL: No shared files mode enabled, IPC is disabled 00:08:29.088 EAL: No shared files mode enabled, IPC is disabled 00:08:29.088 EAL: No shared files mode enabled, IPC is disabled 00:08:29.088 00:08:29.088 real 0m7.322s 00:08:29.088 user 0m6.413s 00:08:29.088 sys 0m0.735s 00:08:29.088 03:36:43 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.088 ************************************ 00:08:29.088 END TEST env_vtophys 00:08:29.088 ************************************ 00:08:29.088 03:36:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:29.088 03:36:43 env -- common/autotest_common.sh@1142 -- # return 0 00:08:29.088 03:36:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:29.088 03:36:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.088 03:36:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.088 03:36:43 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.088 ************************************ 00:08:29.088 START TEST env_pci 00:08:29.088 ************************************ 00:08:29.088 03:36:43 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:29.088 00:08:29.088 00:08:29.089 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.089 http://cunit.sourceforge.net/ 00:08:29.089 00:08:29.089 00:08:29.089 Suite: pci 00:08:29.089 Test: pci_hook ...[2024-07-26 03:36:43.788747] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62394 has claimed it 00:08:29.089 passed 00:08:29.089 00:08:29.089 EAL: Cannot find device (10000:00:01.0) 00:08:29.089 EAL: Failed to attach device on primary process 00:08:29.089 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.089 suites 1 1 n/a 0 0 00:08:29.089 tests 1 1 1 0 0 00:08:29.089 asserts 25 25 25 0 n/a 00:08:29.089 00:08:29.089 Elapsed time = 0.008 seconds 00:08:29.089 00:08:29.089 real 0m0.087s 00:08:29.089 user 0m0.041s 00:08:29.089 sys 0m0.046s 00:08:29.089 03:36:43 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.089 ************************************ 00:08:29.089 END TEST env_pci 00:08:29.089 03:36:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:29.089 ************************************ 00:08:29.089 03:36:43 env -- common/autotest_common.sh@1142 -- # 
return 0 00:08:29.089 03:36:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:29.089 03:36:43 env -- env/env.sh@15 -- # uname 00:08:29.089 03:36:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:29.089 03:36:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:29.089 03:36:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.089 03:36:43 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:29.089 03:36:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.089 03:36:43 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.089 ************************************ 00:08:29.089 START TEST env_dpdk_post_init 00:08:29.089 ************************************ 00:08:29.089 03:36:43 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.089 EAL: Detected CPU lcores: 10 00:08:29.089 EAL: Detected NUMA nodes: 1 00:08:29.089 EAL: Detected shared linkage of DPDK 00:08:29.089 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:29.089 EAL: Selected IOVA mode 'PA' 00:08:29.347 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:29.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:29.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:29.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:29.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:29.347 Starting DPDK initialization... 00:08:29.347 Starting SPDK post initialization... 00:08:29.347 SPDK NVMe probe 00:08:29.347 Attaching to 0000:00:10.0 00:08:29.347 Attaching to 0000:00:11.0 00:08:29.347 Attaching to 0000:00:12.0 00:08:29.347 Attaching to 0000:00:13.0 00:08:29.347 Attached to 0000:00:10.0 00:08:29.347 Attached to 0000:00:11.0 00:08:29.347 Attached to 0000:00:13.0 00:08:29.347 Attached to 0000:00:12.0 00:08:29.347 Cleaning up... 
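(The env_dpdk_post_init run above follows the usual SPDK bind/run/reset cycle. A minimal shell sketch of that cycle, assuming the repo-relative paths that appear in this log and root privileges for the setup script; an illustration of the pattern, not the exact autotest invocation:)

    sudo scripts/setup.sh                   # bind the NVMe controllers to a userspace driver for DPDK
    scripts/setup.sh status                 # confirm the binding and hugepage availability
    sudo test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    sudo scripts/setup.sh reset             # hand the devices back to the kernel nvme driver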
00:08:29.347 00:08:29.347 real 0m0.279s 00:08:29.347 user 0m0.094s 00:08:29.347 sys 0m0.085s 00:08:29.347 03:36:44 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.347 ************************************ 00:08:29.347 END TEST env_dpdk_post_init 00:08:29.347 ************************************ 00:08:29.347 03:36:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:29.347 03:36:44 env -- common/autotest_common.sh@1142 -- # return 0 00:08:29.347 03:36:44 env -- env/env.sh@26 -- # uname 00:08:29.347 03:36:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:29.348 03:36:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:29.348 03:36:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.348 03:36:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.348 03:36:44 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.348 ************************************ 00:08:29.348 START TEST env_mem_callbacks 00:08:29.348 ************************************ 00:08:29.348 03:36:44 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:29.607 EAL: Detected CPU lcores: 10 00:08:29.607 EAL: Detected NUMA nodes: 1 00:08:29.607 EAL: Detected shared linkage of DPDK 00:08:29.607 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:29.607 EAL: Selected IOVA mode 'PA' 00:08:29.607 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:29.607 00:08:29.607 00:08:29.607 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.607 http://cunit.sourceforge.net/ 00:08:29.607 00:08:29.607 00:08:29.607 Suite: memory 00:08:29.607 Test: test ... 
00:08:29.607 register 0x200000200000 2097152 00:08:29.607 malloc 3145728 00:08:29.607 register 0x200000400000 4194304 00:08:29.607 buf 0x2000004fffc0 len 3145728 PASSED 00:08:29.607 malloc 64 00:08:29.607 buf 0x2000004ffec0 len 64 PASSED 00:08:29.607 malloc 4194304 00:08:29.607 register 0x200000800000 6291456 00:08:29.607 buf 0x2000009fffc0 len 4194304 PASSED 00:08:29.607 free 0x2000004fffc0 3145728 00:08:29.607 free 0x2000004ffec0 64 00:08:29.607 unregister 0x200000400000 4194304 PASSED 00:08:29.607 free 0x2000009fffc0 4194304 00:08:29.607 unregister 0x200000800000 6291456 PASSED 00:08:29.607 malloc 8388608 00:08:29.607 register 0x200000400000 10485760 00:08:29.607 buf 0x2000005fffc0 len 8388608 PASSED 00:08:29.607 free 0x2000005fffc0 8388608 00:08:29.607 unregister 0x200000400000 10485760 PASSED 00:08:29.607 passed 00:08:29.607 00:08:29.607 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.607 suites 1 1 n/a 0 0 00:08:29.607 tests 1 1 1 0 0 00:08:29.607 asserts 15 15 15 0 n/a 00:08:29.607 00:08:29.607 Elapsed time = 0.061 seconds 00:08:29.607 00:08:29.607 real 0m0.255s 00:08:29.607 user 0m0.091s 00:08:29.607 sys 0m0.063s 00:08:29.607 03:36:44 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.607 03:36:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:29.607 ************************************ 00:08:29.607 END TEST env_mem_callbacks 00:08:29.607 ************************************ 00:08:29.866 03:36:44 env -- common/autotest_common.sh@1142 -- # return 0 00:08:29.866 00:08:29.866 real 0m8.597s 00:08:29.866 user 0m7.071s 00:08:29.866 sys 0m1.126s 00:08:29.866 03:36:44 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.866 ************************************ 00:08:29.866 END TEST env 00:08:29.866 03:36:44 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 ************************************ 00:08:29.866 03:36:44 -- common/autotest_common.sh@1142 -- # return 0 00:08:29.866 03:36:44 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:29.866 03:36:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.866 03:36:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.866 03:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 ************************************ 00:08:29.866 START TEST rpc 00:08:29.866 ************************************ 00:08:29.866 03:36:44 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:29.866 * Looking for test storage... 00:08:29.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:29.866 03:36:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62513 00:08:29.866 03:36:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:29.866 03:36:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62513 00:08:29.866 03:36:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:29.866 03:36:44 rpc -- common/autotest_common.sh@829 -- # '[' -z 62513 ']' 00:08:29.866 03:36:44 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.866 03:36:44 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.866 03:36:44 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
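(The rpc_integrity test that follows drives spdk_tgt over its JSON-RPC socket; the rpc_cmd helper seen below is a wrapper around scripts/rpc.py. A minimal sketch of the same flow run by hand, assuming a target already listening on the default /var/tmp/spdk.sock:)

    scripts/rpc.py bdev_malloc_create 8 512                        # 8 MB malloc bdev, 512-byte blocks (name printed, e.g. Malloc0)
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0    # layer a passthru bdev on top of the malloc bdev
    scripts/rpc.py bdev_get_bdevs | jq length                      # 2 while both bdevs exist
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0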
00:08:29.866 03:36:44 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.866 03:36:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 [2024-07-26 03:36:44.738506] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:08:29.866 [2024-07-26 03:36:44.739206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:08:30.125 [2024-07-26 03:36:44.907861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.384 [2024-07-26 03:36:45.104974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:30.384 [2024-07-26 03:36:45.105047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62513' to capture a snapshot of events at runtime. 00:08:30.384 [2024-07-26 03:36:45.105073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.384 [2024-07-26 03:36:45.105088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.384 [2024-07-26 03:36:45.105102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62513 for offline analysis/debug. 00:08:30.384 [2024-07-26 03:36:45.105157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.950 03:36:45 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.950 03:36:45 rpc -- common/autotest_common.sh@862 -- # return 0 00:08:30.950 03:36:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:30.951 03:36:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:30.951 03:36:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:30.951 03:36:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:30.951 03:36:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.951 03:36:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.951 03:36:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.951 ************************************ 00:08:30.951 START TEST rpc_integrity 00:08:30.951 ************************************ 00:08:30.951 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:08:30.951 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:30.951 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.951 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.951 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.951 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:30.951 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.209 03:36:45 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:31.209 { 00:08:31.209 "name": "Malloc0", 00:08:31.209 "aliases": [ 00:08:31.209 "6323b91a-dba9-452e-9b58-e85a90c15bd4" 00:08:31.209 ], 00:08:31.209 "product_name": "Malloc disk", 00:08:31.209 "block_size": 512, 00:08:31.209 "num_blocks": 16384, 00:08:31.209 "uuid": "6323b91a-dba9-452e-9b58-e85a90c15bd4", 00:08:31.209 "assigned_rate_limits": { 00:08:31.209 "rw_ios_per_sec": 0, 00:08:31.209 "rw_mbytes_per_sec": 0, 00:08:31.209 "r_mbytes_per_sec": 0, 00:08:31.209 "w_mbytes_per_sec": 0 00:08:31.209 }, 00:08:31.209 "claimed": false, 00:08:31.209 "zoned": false, 00:08:31.209 "supported_io_types": { 00:08:31.209 "read": true, 00:08:31.209 "write": true, 00:08:31.209 "unmap": true, 00:08:31.209 "flush": true, 00:08:31.209 "reset": true, 00:08:31.209 "nvme_admin": false, 00:08:31.209 "nvme_io": false, 00:08:31.209 "nvme_io_md": false, 00:08:31.209 "write_zeroes": true, 00:08:31.209 "zcopy": true, 00:08:31.209 "get_zone_info": false, 00:08:31.209 "zone_management": false, 00:08:31.209 "zone_append": false, 00:08:31.209 "compare": false, 00:08:31.209 "compare_and_write": false, 00:08:31.209 "abort": true, 00:08:31.209 "seek_hole": false, 00:08:31.209 "seek_data": false, 00:08:31.209 "copy": true, 00:08:31.209 "nvme_iov_md": false 00:08:31.209 }, 00:08:31.209 "memory_domains": [ 00:08:31.209 { 00:08:31.209 "dma_device_id": "system", 00:08:31.209 "dma_device_type": 1 00:08:31.209 }, 00:08:31.209 { 00:08:31.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.209 "dma_device_type": 2 00:08:31.209 } 00:08:31.209 ], 00:08:31.209 "driver_specific": {} 00:08:31.209 } 00:08:31.209 ]' 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:31.209 03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.209 [2024-07-26 03:36:45.986201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:31.209 [2024-07-26 03:36:45.986289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.209 [2024-07-26 03:36:45.986342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:31.209 [2024-07-26 03:36:45.986360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.209 [2024-07-26 03:36:45.989059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.209 [2024-07-26 03:36:45.989107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:31.209 Passthru0 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.209 
03:36:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.209 03:36:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.209 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.209 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:31.209 { 00:08:31.209 "name": "Malloc0", 00:08:31.210 "aliases": [ 00:08:31.210 "6323b91a-dba9-452e-9b58-e85a90c15bd4" 00:08:31.210 ], 00:08:31.210 "product_name": "Malloc disk", 00:08:31.210 "block_size": 512, 00:08:31.210 "num_blocks": 16384, 00:08:31.210 "uuid": "6323b91a-dba9-452e-9b58-e85a90c15bd4", 00:08:31.210 "assigned_rate_limits": { 00:08:31.210 "rw_ios_per_sec": 0, 00:08:31.210 "rw_mbytes_per_sec": 0, 00:08:31.210 "r_mbytes_per_sec": 0, 00:08:31.210 "w_mbytes_per_sec": 0 00:08:31.210 }, 00:08:31.210 "claimed": true, 00:08:31.210 "claim_type": "exclusive_write", 00:08:31.210 "zoned": false, 00:08:31.210 "supported_io_types": { 00:08:31.210 "read": true, 00:08:31.210 "write": true, 00:08:31.210 "unmap": true, 00:08:31.210 "flush": true, 00:08:31.210 "reset": true, 00:08:31.210 "nvme_admin": false, 00:08:31.210 "nvme_io": false, 00:08:31.210 "nvme_io_md": false, 00:08:31.210 "write_zeroes": true, 00:08:31.210 "zcopy": true, 00:08:31.210 "get_zone_info": false, 00:08:31.210 "zone_management": false, 00:08:31.210 "zone_append": false, 00:08:31.210 "compare": false, 00:08:31.210 "compare_and_write": false, 00:08:31.210 "abort": true, 00:08:31.210 "seek_hole": false, 00:08:31.210 "seek_data": false, 00:08:31.210 "copy": true, 00:08:31.210 "nvme_iov_md": false 00:08:31.210 }, 00:08:31.210 "memory_domains": [ 00:08:31.210 { 00:08:31.210 "dma_device_id": "system", 00:08:31.210 "dma_device_type": 1 00:08:31.210 }, 00:08:31.210 { 00:08:31.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.210 "dma_device_type": 2 00:08:31.210 } 00:08:31.210 ], 00:08:31.210 "driver_specific": {} 00:08:31.210 }, 00:08:31.210 { 00:08:31.210 "name": "Passthru0", 00:08:31.210 "aliases": [ 00:08:31.210 "2bc7ca41-ceb5-5c7b-a917-7455a23faa85" 00:08:31.210 ], 00:08:31.210 "product_name": "passthru", 00:08:31.210 "block_size": 512, 00:08:31.210 "num_blocks": 16384, 00:08:31.210 "uuid": "2bc7ca41-ceb5-5c7b-a917-7455a23faa85", 00:08:31.210 "assigned_rate_limits": { 00:08:31.210 "rw_ios_per_sec": 0, 00:08:31.210 "rw_mbytes_per_sec": 0, 00:08:31.210 "r_mbytes_per_sec": 0, 00:08:31.210 "w_mbytes_per_sec": 0 00:08:31.210 }, 00:08:31.210 "claimed": false, 00:08:31.210 "zoned": false, 00:08:31.210 "supported_io_types": { 00:08:31.210 "read": true, 00:08:31.210 "write": true, 00:08:31.210 "unmap": true, 00:08:31.210 "flush": true, 00:08:31.210 "reset": true, 00:08:31.210 "nvme_admin": false, 00:08:31.210 "nvme_io": false, 00:08:31.210 "nvme_io_md": false, 00:08:31.210 "write_zeroes": true, 00:08:31.210 "zcopy": true, 00:08:31.210 "get_zone_info": false, 00:08:31.210 "zone_management": false, 00:08:31.210 "zone_append": false, 00:08:31.210 "compare": false, 00:08:31.210 "compare_and_write": false, 00:08:31.210 "abort": true, 00:08:31.210 "seek_hole": false, 00:08:31.210 "seek_data": false, 00:08:31.210 "copy": true, 00:08:31.210 "nvme_iov_md": false 00:08:31.210 }, 00:08:31.210 "memory_domains": [ 00:08:31.210 { 00:08:31.210 "dma_device_id": "system", 00:08:31.210 "dma_device_type": 1 00:08:31.210 }, 00:08:31.210 { 00:08:31.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.210 "dma_device_type": 2 
00:08:31.210 } 00:08:31.210 ], 00:08:31.210 "driver_specific": { 00:08:31.210 "passthru": { 00:08:31.210 "name": "Passthru0", 00:08:31.210 "base_bdev_name": "Malloc0" 00:08:31.210 } 00:08:31.210 } 00:08:31.210 } 00:08:31.210 ]' 00:08:31.210 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:31.210 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:31.210 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.210 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.210 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.210 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.468 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.468 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:31.468 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:31.468 03:36:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:31.468 00:08:31.468 real 0m0.342s 00:08:31.468 user 0m0.208s 00:08:31.468 sys 0m0.041s 00:08:31.468 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.468 03:36:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.468 ************************************ 00:08:31.468 END TEST rpc_integrity 00:08:31.468 ************************************ 00:08:31.468 03:36:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:31.468 03:36:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:31.468 03:36:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.468 03:36:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.468 03:36:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.468 ************************************ 00:08:31.468 START TEST rpc_plugins 00:08:31.468 ************************************ 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:08:31.468 { 00:08:31.468 "name": "Malloc1", 00:08:31.468 "aliases": [ 00:08:31.468 "7df00a6e-705c-4e72-a5ec-a3838f06a9cf" 00:08:31.468 ], 00:08:31.468 "product_name": "Malloc disk", 00:08:31.468 "block_size": 4096, 00:08:31.468 "num_blocks": 256, 00:08:31.468 "uuid": "7df00a6e-705c-4e72-a5ec-a3838f06a9cf", 00:08:31.468 "assigned_rate_limits": { 00:08:31.468 "rw_ios_per_sec": 0, 00:08:31.468 "rw_mbytes_per_sec": 0, 00:08:31.468 "r_mbytes_per_sec": 0, 00:08:31.468 "w_mbytes_per_sec": 0 00:08:31.468 }, 00:08:31.468 "claimed": false, 00:08:31.468 "zoned": false, 00:08:31.468 "supported_io_types": { 00:08:31.468 "read": true, 00:08:31.468 "write": true, 00:08:31.468 "unmap": true, 00:08:31.468 "flush": true, 00:08:31.468 "reset": true, 00:08:31.468 "nvme_admin": false, 00:08:31.468 "nvme_io": false, 00:08:31.468 "nvme_io_md": false, 00:08:31.468 "write_zeroes": true, 00:08:31.468 "zcopy": true, 00:08:31.468 "get_zone_info": false, 00:08:31.468 "zone_management": false, 00:08:31.468 "zone_append": false, 00:08:31.468 "compare": false, 00:08:31.468 "compare_and_write": false, 00:08:31.468 "abort": true, 00:08:31.468 "seek_hole": false, 00:08:31.468 "seek_data": false, 00:08:31.468 "copy": true, 00:08:31.468 "nvme_iov_md": false 00:08:31.468 }, 00:08:31.468 "memory_domains": [ 00:08:31.468 { 00:08:31.468 "dma_device_id": "system", 00:08:31.468 "dma_device_type": 1 00:08:31.468 }, 00:08:31.468 { 00:08:31.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.468 "dma_device_type": 2 00:08:31.468 } 00:08:31.468 ], 00:08:31.468 "driver_specific": {} 00:08:31.468 } 00:08:31.468 ]' 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:31.468 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:31.468 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:31.727 03:36:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:31.727 00:08:31.727 real 0m0.171s 00:08:31.727 user 0m0.109s 00:08:31.727 sys 0m0.019s 00:08:31.727 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.727 03:36:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:31.727 ************************************ 00:08:31.727 END TEST rpc_plugins 00:08:31.727 ************************************ 00:08:31.727 03:36:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:31.727 03:36:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:31.727 03:36:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.727 03:36:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.727 03:36:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.727 ************************************ 00:08:31.727 
START TEST rpc_trace_cmd_test 00:08:31.727 ************************************ 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:31.727 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62513", 00:08:31.727 "tpoint_group_mask": "0x8", 00:08:31.727 "iscsi_conn": { 00:08:31.727 "mask": "0x2", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "scsi": { 00:08:31.727 "mask": "0x4", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "bdev": { 00:08:31.727 "mask": "0x8", 00:08:31.727 "tpoint_mask": "0xffffffffffffffff" 00:08:31.727 }, 00:08:31.727 "nvmf_rdma": { 00:08:31.727 "mask": "0x10", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "nvmf_tcp": { 00:08:31.727 "mask": "0x20", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "ftl": { 00:08:31.727 "mask": "0x40", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "blobfs": { 00:08:31.727 "mask": "0x80", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "dsa": { 00:08:31.727 "mask": "0x200", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "thread": { 00:08:31.727 "mask": "0x400", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "nvme_pcie": { 00:08:31.727 "mask": "0x800", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "iaa": { 00:08:31.727 "mask": "0x1000", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "nvme_tcp": { 00:08:31.727 "mask": "0x2000", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "bdev_nvme": { 00:08:31.727 "mask": "0x4000", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 }, 00:08:31.727 "sock": { 00:08:31.727 "mask": "0x8000", 00:08:31.727 "tpoint_mask": "0x0" 00:08:31.727 } 00:08:31.727 }' 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:31.727 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:31.986 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:31.986 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:31.986 03:36:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:31.986 00:08:31.986 real 0m0.289s 00:08:31.986 user 0m0.249s 00:08:31.986 sys 0m0.028s 00:08:31.986 03:36:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.986 03:36:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.986 ************************************ 00:08:31.986 END 
TEST rpc_trace_cmd_test 00:08:31.986 ************************************ 00:08:31.986 03:36:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:31.986 03:36:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:31.986 03:36:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:31.986 03:36:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:31.986 03:36:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.986 03:36:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.986 03:36:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.986 ************************************ 00:08:31.986 START TEST rpc_daemon_integrity 00:08:31.986 ************************************ 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:31.986 { 00:08:31.986 "name": "Malloc2", 00:08:31.986 "aliases": [ 00:08:31.986 "5d429026-4fe5-4ee4-a7a1-b4f809ba7505" 00:08:31.986 ], 00:08:31.986 "product_name": "Malloc disk", 00:08:31.986 "block_size": 512, 00:08:31.986 "num_blocks": 16384, 00:08:31.986 "uuid": "5d429026-4fe5-4ee4-a7a1-b4f809ba7505", 00:08:31.986 "assigned_rate_limits": { 00:08:31.986 "rw_ios_per_sec": 0, 00:08:31.986 "rw_mbytes_per_sec": 0, 00:08:31.986 "r_mbytes_per_sec": 0, 00:08:31.986 "w_mbytes_per_sec": 0 00:08:31.986 }, 00:08:31.986 "claimed": false, 00:08:31.986 "zoned": false, 00:08:31.986 "supported_io_types": { 00:08:31.986 "read": true, 00:08:31.986 "write": true, 00:08:31.986 "unmap": true, 00:08:31.986 "flush": true, 00:08:31.986 "reset": true, 00:08:31.986 "nvme_admin": false, 00:08:31.986 "nvme_io": false, 00:08:31.986 "nvme_io_md": false, 00:08:31.986 "write_zeroes": true, 00:08:31.986 "zcopy": true, 00:08:31.986 "get_zone_info": false, 00:08:31.986 "zone_management": false, 00:08:31.986 "zone_append": false, 00:08:31.986 "compare": false, 00:08:31.986 "compare_and_write": false, 00:08:31.986 "abort": true, 00:08:31.986 "seek_hole": false, 
00:08:31.986 "seek_data": false, 00:08:31.986 "copy": true, 00:08:31.986 "nvme_iov_md": false 00:08:31.986 }, 00:08:31.986 "memory_domains": [ 00:08:31.986 { 00:08:31.986 "dma_device_id": "system", 00:08:31.986 "dma_device_type": 1 00:08:31.986 }, 00:08:31.986 { 00:08:31.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.986 "dma_device_type": 2 00:08:31.986 } 00:08:31.986 ], 00:08:31.986 "driver_specific": {} 00:08:31.986 } 00:08:31.986 ]' 00:08:31.986 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:32.246 [2024-07-26 03:36:46.938888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:32.246 [2024-07-26 03:36:46.938969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.246 [2024-07-26 03:36:46.939012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:32.246 [2024-07-26 03:36:46.939029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.246 [2024-07-26 03:36:46.942242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.246 [2024-07-26 03:36:46.942287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:32.246 Passthru0 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.246 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:32.246 { 00:08:32.246 "name": "Malloc2", 00:08:32.246 "aliases": [ 00:08:32.246 "5d429026-4fe5-4ee4-a7a1-b4f809ba7505" 00:08:32.246 ], 00:08:32.246 "product_name": "Malloc disk", 00:08:32.246 "block_size": 512, 00:08:32.246 "num_blocks": 16384, 00:08:32.246 "uuid": "5d429026-4fe5-4ee4-a7a1-b4f809ba7505", 00:08:32.246 "assigned_rate_limits": { 00:08:32.246 "rw_ios_per_sec": 0, 00:08:32.246 "rw_mbytes_per_sec": 0, 00:08:32.246 "r_mbytes_per_sec": 0, 00:08:32.246 "w_mbytes_per_sec": 0 00:08:32.246 }, 00:08:32.246 "claimed": true, 00:08:32.246 "claim_type": "exclusive_write", 00:08:32.246 "zoned": false, 00:08:32.246 "supported_io_types": { 00:08:32.246 "read": true, 00:08:32.246 "write": true, 00:08:32.246 "unmap": true, 00:08:32.246 "flush": true, 00:08:32.246 "reset": true, 00:08:32.246 "nvme_admin": false, 00:08:32.246 "nvme_io": false, 00:08:32.246 "nvme_io_md": false, 00:08:32.246 "write_zeroes": true, 00:08:32.246 "zcopy": true, 00:08:32.246 "get_zone_info": false, 00:08:32.246 "zone_management": false, 00:08:32.246 "zone_append": false, 00:08:32.246 "compare": false, 00:08:32.246 "compare_and_write": false, 00:08:32.246 "abort": true, 00:08:32.246 "seek_hole": false, 00:08:32.246 "seek_data": false, 00:08:32.246 "copy": true, 00:08:32.246 "nvme_iov_md": false 00:08:32.246 }, 00:08:32.246 
"memory_domains": [ 00:08:32.246 { 00:08:32.246 "dma_device_id": "system", 00:08:32.246 "dma_device_type": 1 00:08:32.246 }, 00:08:32.246 { 00:08:32.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.246 "dma_device_type": 2 00:08:32.246 } 00:08:32.246 ], 00:08:32.246 "driver_specific": {} 00:08:32.246 }, 00:08:32.246 { 00:08:32.246 "name": "Passthru0", 00:08:32.246 "aliases": [ 00:08:32.246 "c0342dfe-cac2-5787-aed8-d01b407b4c30" 00:08:32.246 ], 00:08:32.246 "product_name": "passthru", 00:08:32.246 "block_size": 512, 00:08:32.246 "num_blocks": 16384, 00:08:32.246 "uuid": "c0342dfe-cac2-5787-aed8-d01b407b4c30", 00:08:32.246 "assigned_rate_limits": { 00:08:32.246 "rw_ios_per_sec": 0, 00:08:32.246 "rw_mbytes_per_sec": 0, 00:08:32.246 "r_mbytes_per_sec": 0, 00:08:32.246 "w_mbytes_per_sec": 0 00:08:32.246 }, 00:08:32.246 "claimed": false, 00:08:32.246 "zoned": false, 00:08:32.246 "supported_io_types": { 00:08:32.246 "read": true, 00:08:32.246 "write": true, 00:08:32.246 "unmap": true, 00:08:32.246 "flush": true, 00:08:32.246 "reset": true, 00:08:32.246 "nvme_admin": false, 00:08:32.246 "nvme_io": false, 00:08:32.246 "nvme_io_md": false, 00:08:32.246 "write_zeroes": true, 00:08:32.246 "zcopy": true, 00:08:32.246 "get_zone_info": false, 00:08:32.246 "zone_management": false, 00:08:32.246 "zone_append": false, 00:08:32.246 "compare": false, 00:08:32.246 "compare_and_write": false, 00:08:32.246 "abort": true, 00:08:32.246 "seek_hole": false, 00:08:32.246 "seek_data": false, 00:08:32.246 "copy": true, 00:08:32.246 "nvme_iov_md": false 00:08:32.246 }, 00:08:32.246 "memory_domains": [ 00:08:32.246 { 00:08:32.246 "dma_device_id": "system", 00:08:32.247 "dma_device_type": 1 00:08:32.247 }, 00:08:32.247 { 00:08:32.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.247 "dma_device_type": 2 00:08:32.247 } 00:08:32.247 ], 00:08:32.247 "driver_specific": { 00:08:32.247 "passthru": { 00:08:32.247 "name": "Passthru0", 00:08:32.247 "base_bdev_name": "Malloc2" 00:08:32.247 } 00:08:32.247 } 00:08:32.247 } 00:08:32.247 ]' 00:08:32.247 03:36:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:32.247 
03:36:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:32.247 00:08:32.247 real 0m0.345s 00:08:32.247 user 0m0.214s 00:08:32.247 sys 0m0.034s 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.247 03:36:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:32.247 ************************************ 00:08:32.247 END TEST rpc_daemon_integrity 00:08:32.247 ************************************ 00:08:32.505 03:36:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:32.505 03:36:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:32.505 03:36:47 rpc -- rpc/rpc.sh@84 -- # killprocess 62513 00:08:32.505 03:36:47 rpc -- common/autotest_common.sh@948 -- # '[' -z 62513 ']' 00:08:32.505 03:36:47 rpc -- common/autotest_common.sh@952 -- # kill -0 62513 00:08:32.505 03:36:47 rpc -- common/autotest_common.sh@953 -- # uname 00:08:32.505 03:36:47 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.506 03:36:47 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62513 00:08:32.506 03:36:47 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:32.506 03:36:47 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:32.506 03:36:47 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62513' 00:08:32.506 killing process with pid 62513 00:08:32.506 03:36:47 rpc -- common/autotest_common.sh@967 -- # kill 62513 00:08:32.506 03:36:47 rpc -- common/autotest_common.sh@972 -- # wait 62513 00:08:35.036 00:08:35.036 real 0m4.809s 00:08:35.036 user 0m5.643s 00:08:35.036 sys 0m0.663s 00:08:35.036 03:36:49 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.036 ************************************ 00:08:35.036 END TEST rpc 00:08:35.036 ************************************ 00:08:35.036 03:36:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.036 03:36:49 -- common/autotest_common.sh@1142 -- # return 0 00:08:35.037 03:36:49 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:35.037 03:36:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.037 03:36:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.037 03:36:49 -- common/autotest_common.sh@10 -- # set +x 00:08:35.037 ************************************ 00:08:35.037 START TEST skip_rpc 00:08:35.037 ************************************ 00:08:35.037 03:36:49 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:35.037 * Looking for test storage... 
00:08:35.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:35.037 03:36:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:35.037 03:36:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:35.037 03:36:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:35.037 03:36:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.037 03:36:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.037 03:36:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.037 ************************************ 00:08:35.037 START TEST skip_rpc 00:08:35.037 ************************************ 00:08:35.037 03:36:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:08:35.037 03:36:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62729 00:08:35.037 03:36:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:35.037 03:36:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:35.037 03:36:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:35.037 [2024-07-26 03:36:49.640230] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:08:35.037 [2024-07-26 03:36:49.640473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62729 ] 00:08:35.037 [2024-07-26 03:36:49.827037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.295 [2024-07-26 03:36:50.065021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62729 
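The trace above is the core of the skip_rpc case: spdk_tgt is launched with --no-rpc-server, so no JSON-RPC listener ever comes up on /var/tmp/spdk.sock, and the NOT wrapper asserts that rpc_cmd spdk_get_version exits non-zero before the target is torn down with killprocess. A minimal standalone sketch of the same check, assuming an SPDK tree at /home/vagrant/spdk_repo/spdk and the stock scripts/rpc.py client (the suite itself goes through the rpc_cmd and NOT helpers instead):
  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &    # target starts without a JSON-RPC server
  tgt_pid=$!
  sleep 5
  if "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version; then
      echo "unexpected: RPC answered although --no-rpc-server was given" >&2
      exit 1                                             # the suite expects this call to fail
  fi
  kill "$tgt_pid" && wait "$tgt_pid"                     # same teardown killprocess performs below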
00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62729 ']' 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62729 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62729 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.561 killing process with pid 62729 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62729' 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62729 00:08:40.561 03:36:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62729 00:08:41.935 00:08:41.935 real 0m7.165s 00:08:41.935 user 0m6.676s 00:08:41.935 sys 0m0.362s 00:08:41.935 03:36:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.935 ************************************ 00:08:41.935 END TEST skip_rpc 00:08:41.935 ************************************ 00:08:41.935 03:36:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.935 03:36:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:41.935 03:36:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:41.935 03:36:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:41.935 03:36:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.935 03:36:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.935 ************************************ 00:08:41.935 START TEST skip_rpc_with_json 00:08:41.935 ************************************ 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62833 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62833 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62833 ']' 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
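The skip_rpc_with_json run that begins here exercises a configuration round-trip: once the target starting above answers RPC, a TCP transport is created, the live configuration is dumped with save_config, a fresh target is booted from that JSON via --json, and its log is grepped for the 'TCP Transport Init' notice. A condensed sketch of that flow, assuming the same tree layout and the CONFIG_PATH/LOG_PATH files the suite assigns (test/rpc/config.json and test/rpc/log.txt); the entries that follow show the real sequence in full:
  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp                    # create the TCP transport on the running target
  "$rpc" save_config > "$spdk/test/rpc/config.json"      # dump the whole live configuration as JSON
  # ...stop the first target, then boot a second one directly from the saved config...
  "$spdk/build/bin/spdk_tgt" --no-rpc-server -m 0x1 \
      --json "$spdk/test/rpc/config.json" > "$spdk/test/rpc/log.txt" 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' "$spdk/test/rpc/log.txt"  # transport was recreated from the saved config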
00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.935 03:36:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:41.935 [2024-07-26 03:36:56.814653] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:08:41.935 [2024-07-26 03:36:56.814831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62833 ] 00:08:42.233 [2024-07-26 03:36:56.980713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.491 [2024-07-26 03:36:57.171339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:43.057 [2024-07-26 03:36:57.872651] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:43.057 request: 00:08:43.057 { 00:08:43.057 "trtype": "tcp", 00:08:43.057 "method": "nvmf_get_transports", 00:08:43.057 "req_id": 1 00:08:43.057 } 00:08:43.057 Got JSON-RPC error response 00:08:43.057 response: 00:08:43.057 { 00:08:43.057 "code": -19, 00:08:43.057 "message": "No such device" 00:08:43.057 } 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.057 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:43.058 [2024-07-26 03:36:57.884852] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.058 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.058 03:36:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:43.058 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.058 03:36:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:43.316 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.316 03:36:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:43.316 { 00:08:43.316 "subsystems": [ 00:08:43.316 { 00:08:43.316 "subsystem": "keyring", 00:08:43.316 "config": [] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "iobuf", 00:08:43.317 "config": [ 00:08:43.317 { 00:08:43.317 "method": "iobuf_set_options", 00:08:43.317 "params": { 00:08:43.317 "small_pool_count": 8192, 00:08:43.317 "large_pool_count": 1024, 00:08:43.317 "small_bufsize": 8192, 00:08:43.317 "large_bufsize": 135168 00:08:43.317 } 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "sock", 00:08:43.317 "config": [ 00:08:43.317 { 00:08:43.317 "method": 
"sock_set_default_impl", 00:08:43.317 "params": { 00:08:43.317 "impl_name": "posix" 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "sock_impl_set_options", 00:08:43.317 "params": { 00:08:43.317 "impl_name": "ssl", 00:08:43.317 "recv_buf_size": 4096, 00:08:43.317 "send_buf_size": 4096, 00:08:43.317 "enable_recv_pipe": true, 00:08:43.317 "enable_quickack": false, 00:08:43.317 "enable_placement_id": 0, 00:08:43.317 "enable_zerocopy_send_server": true, 00:08:43.317 "enable_zerocopy_send_client": false, 00:08:43.317 "zerocopy_threshold": 0, 00:08:43.317 "tls_version": 0, 00:08:43.317 "enable_ktls": false 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "sock_impl_set_options", 00:08:43.317 "params": { 00:08:43.317 "impl_name": "posix", 00:08:43.317 "recv_buf_size": 2097152, 00:08:43.317 "send_buf_size": 2097152, 00:08:43.317 "enable_recv_pipe": true, 00:08:43.317 "enable_quickack": false, 00:08:43.317 "enable_placement_id": 0, 00:08:43.317 "enable_zerocopy_send_server": true, 00:08:43.317 "enable_zerocopy_send_client": false, 00:08:43.317 "zerocopy_threshold": 0, 00:08:43.317 "tls_version": 0, 00:08:43.317 "enable_ktls": false 00:08:43.317 } 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "vmd", 00:08:43.317 "config": [] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "accel", 00:08:43.317 "config": [ 00:08:43.317 { 00:08:43.317 "method": "accel_set_options", 00:08:43.317 "params": { 00:08:43.317 "small_cache_size": 128, 00:08:43.317 "large_cache_size": 16, 00:08:43.317 "task_count": 2048, 00:08:43.317 "sequence_count": 2048, 00:08:43.317 "buf_count": 2048 00:08:43.317 } 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "bdev", 00:08:43.317 "config": [ 00:08:43.317 { 00:08:43.317 "method": "bdev_set_options", 00:08:43.317 "params": { 00:08:43.317 "bdev_io_pool_size": 65535, 00:08:43.317 "bdev_io_cache_size": 256, 00:08:43.317 "bdev_auto_examine": true, 00:08:43.317 "iobuf_small_cache_size": 128, 00:08:43.317 "iobuf_large_cache_size": 16 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "bdev_raid_set_options", 00:08:43.317 "params": { 00:08:43.317 "process_window_size_kb": 1024, 00:08:43.317 "process_max_bandwidth_mb_sec": 0 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "bdev_iscsi_set_options", 00:08:43.317 "params": { 00:08:43.317 "timeout_sec": 30 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "bdev_nvme_set_options", 00:08:43.317 "params": { 00:08:43.317 "action_on_timeout": "none", 00:08:43.317 "timeout_us": 0, 00:08:43.317 "timeout_admin_us": 0, 00:08:43.317 "keep_alive_timeout_ms": 10000, 00:08:43.317 "arbitration_burst": 0, 00:08:43.317 "low_priority_weight": 0, 00:08:43.317 "medium_priority_weight": 0, 00:08:43.317 "high_priority_weight": 0, 00:08:43.317 "nvme_adminq_poll_period_us": 10000, 00:08:43.317 "nvme_ioq_poll_period_us": 0, 00:08:43.317 "io_queue_requests": 0, 00:08:43.317 "delay_cmd_submit": true, 00:08:43.317 "transport_retry_count": 4, 00:08:43.317 "bdev_retry_count": 3, 00:08:43.317 "transport_ack_timeout": 0, 00:08:43.317 "ctrlr_loss_timeout_sec": 0, 00:08:43.317 "reconnect_delay_sec": 0, 00:08:43.317 "fast_io_fail_timeout_sec": 0, 00:08:43.317 "disable_auto_failback": false, 00:08:43.317 "generate_uuids": false, 00:08:43.317 "transport_tos": 0, 00:08:43.317 "nvme_error_stat": false, 00:08:43.317 "rdma_srq_size": 0, 00:08:43.317 "io_path_stat": false, 00:08:43.317 
"allow_accel_sequence": false, 00:08:43.317 "rdma_max_cq_size": 0, 00:08:43.317 "rdma_cm_event_timeout_ms": 0, 00:08:43.317 "dhchap_digests": [ 00:08:43.317 "sha256", 00:08:43.317 "sha384", 00:08:43.317 "sha512" 00:08:43.317 ], 00:08:43.317 "dhchap_dhgroups": [ 00:08:43.317 "null", 00:08:43.317 "ffdhe2048", 00:08:43.317 "ffdhe3072", 00:08:43.317 "ffdhe4096", 00:08:43.317 "ffdhe6144", 00:08:43.317 "ffdhe8192" 00:08:43.317 ] 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "bdev_nvme_set_hotplug", 00:08:43.317 "params": { 00:08:43.317 "period_us": 100000, 00:08:43.317 "enable": false 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "bdev_wait_for_examine" 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "scsi", 00:08:43.317 "config": null 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "scheduler", 00:08:43.317 "config": [ 00:08:43.317 { 00:08:43.317 "method": "framework_set_scheduler", 00:08:43.317 "params": { 00:08:43.317 "name": "static" 00:08:43.317 } 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "vhost_scsi", 00:08:43.317 "config": [] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "vhost_blk", 00:08:43.317 "config": [] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "ublk", 00:08:43.317 "config": [] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "nbd", 00:08:43.317 "config": [] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "nvmf", 00:08:43.317 "config": [ 00:08:43.317 { 00:08:43.317 "method": "nvmf_set_config", 00:08:43.317 "params": { 00:08:43.317 "discovery_filter": "match_any", 00:08:43.317 "admin_cmd_passthru": { 00:08:43.317 "identify_ctrlr": false 00:08:43.317 } 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "nvmf_set_max_subsystems", 00:08:43.317 "params": { 00:08:43.317 "max_subsystems": 1024 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "nvmf_set_crdt", 00:08:43.317 "params": { 00:08:43.317 "crdt1": 0, 00:08:43.317 "crdt2": 0, 00:08:43.317 "crdt3": 0 00:08:43.317 } 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "method": "nvmf_create_transport", 00:08:43.317 "params": { 00:08:43.317 "trtype": "TCP", 00:08:43.317 "max_queue_depth": 128, 00:08:43.317 "max_io_qpairs_per_ctrlr": 127, 00:08:43.317 "in_capsule_data_size": 4096, 00:08:43.317 "max_io_size": 131072, 00:08:43.317 "io_unit_size": 131072, 00:08:43.317 "max_aq_depth": 128, 00:08:43.317 "num_shared_buffers": 511, 00:08:43.317 "buf_cache_size": 4294967295, 00:08:43.317 "dif_insert_or_strip": false, 00:08:43.317 "zcopy": false, 00:08:43.317 "c2h_success": true, 00:08:43.317 "sock_priority": 0, 00:08:43.317 "abort_timeout_sec": 1, 00:08:43.317 "ack_timeout": 0, 00:08:43.317 "data_wr_pool_size": 0 00:08:43.317 } 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 }, 00:08:43.317 { 00:08:43.317 "subsystem": "iscsi", 00:08:43.317 "config": [ 00:08:43.317 { 00:08:43.317 "method": "iscsi_set_options", 00:08:43.317 "params": { 00:08:43.317 "node_base": "iqn.2016-06.io.spdk", 00:08:43.317 "max_sessions": 128, 00:08:43.317 "max_connections_per_session": 2, 00:08:43.317 "max_queue_depth": 64, 00:08:43.317 "default_time2wait": 2, 00:08:43.317 "default_time2retain": 20, 00:08:43.317 "first_burst_length": 8192, 00:08:43.317 "immediate_data": true, 00:08:43.317 "allow_duplicated_isid": false, 00:08:43.317 "error_recovery_level": 0, 00:08:43.317 "nop_timeout": 60, 00:08:43.317 "nop_in_interval": 30, 00:08:43.317 "disable_chap": false, 
00:08:43.317 "require_chap": false, 00:08:43.317 "mutual_chap": false, 00:08:43.317 "chap_group": 0, 00:08:43.317 "max_large_datain_per_connection": 64, 00:08:43.317 "max_r2t_per_connection": 4, 00:08:43.317 "pdu_pool_size": 36864, 00:08:43.317 "immediate_data_pool_size": 16384, 00:08:43.317 "data_out_pool_size": 2048 00:08:43.317 } 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 } 00:08:43.317 ] 00:08:43.317 } 00:08:43.317 03:36:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:43.317 03:36:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62833 00:08:43.317 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62833 ']' 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62833 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62833 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62833' 00:08:43.318 killing process with pid 62833 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62833 00:08:43.318 03:36:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62833 00:08:45.874 03:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62884 00:08:45.874 03:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:45.874 03:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62884 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62884 ']' 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62884 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62884 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62884' 00:08:51.135 killing process with pid 62884 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62884 00:08:51.135 03:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62884 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:53.038 00:08:53.038 real 0m10.818s 00:08:53.038 user 0m10.561s 00:08:53.038 sys 0m0.705s 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.038 ************************************ 00:08:53.038 END TEST skip_rpc_with_json 00:08:53.038 ************************************ 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:53.038 03:37:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.038 ************************************ 00:08:53.038 START TEST skip_rpc_with_delay 00:08:53.038 ************************************ 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:53.038 [2024-07-26 03:37:07.692803] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
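The error above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc tells the application to pause initialization until it is resumed over JSON-RPC (normally with the framework_start_init RPC), which can never happen when --no-rpc-server suppresses the RPC listener, so spdk_app_start rejects the combination and exits non-zero. A sketch of the same negative check, assuming the paths used throughout this run:
  spdk=/home/vagrant/spdk_repo/spdk
  if "$spdk/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: contradictory flags were accepted" >&2
      exit 1                                             # the suite expects the start-up error shown above
  fi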
00:08:53.038 [2024-07-26 03:37:07.693020] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.038 00:08:53.038 real 0m0.188s 00:08:53.038 user 0m0.111s 00:08:53.038 sys 0m0.075s 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.038 03:37:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:53.038 ************************************ 00:08:53.038 END TEST skip_rpc_with_delay 00:08:53.038 ************************************ 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:53.038 03:37:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:53.038 03:37:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:53.038 03:37:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.038 03:37:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.038 ************************************ 00:08:53.038 START TEST exit_on_failed_rpc_init 00:08:53.038 ************************************ 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=63020 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 63020 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 63020 ']' 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.038 03:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:53.038 [2024-07-26 03:37:07.936740] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
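From this point the log shows the setup for exit_on_failed_rpc_init: a first spdk_tgt (pid 63020) is brought up on core mask 0x1 and, once it is listening, a second instance is launched with -m 0x2. Both default to the same /var/tmp/spdk.sock RPC socket, so the second instance is expected to fail during RPC initialization. A compact sketch of that scenario, with a hypothetical first_pid variable standing in for the test's spdk_pid bookkeeping and the harness's waitforlisten step elided:
  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" -m 0x1 &                    # first instance, owns /var/tmp/spdk.sock
  first_pid=$!
  # ...wait until the first instance answers RPC (waitforlisten in the harness)...
  if "$spdk/build/bin/spdk_tgt" -m 0x2; then             # second instance, same default RPC socket
      echo "unexpected: second target initialized its RPC service" >&2
      exit 1                                             # expected path is the rpc.c bind error below
  fi
  kill "$first_pid" && wait "$first_pid"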
00:08:53.038 [2024-07-26 03:37:07.936984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63020 ] 00:08:53.297 [2024-07-26 03:37:08.130101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.556 [2024-07-26 03:37:08.389934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.492 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.492 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:08:54.492 03:37:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:54.492 03:37:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:54.492 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:54.493 03:37:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:54.493 [2024-07-26 03:37:09.250419] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:08:54.493 [2024-07-26 03:37:09.250647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63038 ] 00:08:54.751 [2024-07-26 03:37:09.442944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.009 [2024-07-26 03:37:09.717095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.009 [2024-07-26 03:37:09.717230] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
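The rpc.c errors above are exactly what the test waits for: the first target already owns /var/tmp/spdk.sock, so the second instance cannot bind it, spdk_rpc_initialize fails, and the app stops with a non-zero status (the es=234, then 106, then 1 handling that follows maps that status back to a plain failure). When two targets genuinely need to coexist, the usual remedy is to give the second one its own RPC socket; the -r option sketched below is the generic SPDK application flag for the RPC listen address and is an assumption here, since this test deliberately omits it to provoke the collision:
  # Hypothetical variant that avoids the collision instead of triggering it:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version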
00:08:55.009 [2024-07-26 03:37:09.717260] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:55.009 [2024-07-26 03:37:09.717283] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 63020 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 63020 ']' 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 63020 00:08:55.268 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:08:55.527 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:55.527 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63020 00:08:55.527 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:55.527 killing process with pid 63020 00:08:55.527 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:55.527 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63020' 00:08:55.527 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 63020 00:08:55.527 03:37:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 63020 00:08:58.059 00:08:58.059 real 0m4.559s 00:08:58.059 user 0m5.383s 00:08:58.059 sys 0m0.588s 00:08:58.059 03:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.059 03:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:58.060 ************************************ 00:08:58.060 END TEST exit_on_failed_rpc_init 00:08:58.060 ************************************ 00:08:58.060 03:37:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:58.060 03:37:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:58.060 00:08:58.060 real 0m22.979s 00:08:58.060 user 0m22.829s 00:08:58.060 sys 0m1.870s 00:08:58.060 03:37:12 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.060 03:37:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.060 ************************************ 00:08:58.060 END TEST skip_rpc 00:08:58.060 ************************************ 00:08:58.060 03:37:12 -- common/autotest_common.sh@1142 -- # return 0 00:08:58.060 03:37:12 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:58.060 03:37:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.060 
03:37:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.060 03:37:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.060 ************************************ 00:08:58.060 START TEST rpc_client 00:08:58.060 ************************************ 00:08:58.060 03:37:12 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:58.060 * Looking for test storage... 00:08:58.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:58.060 03:37:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:58.060 OK 00:08:58.060 03:37:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:58.060 ************************************ 00:08:58.060 END TEST rpc_client 00:08:58.060 ************************************ 00:08:58.060 00:08:58.060 real 0m0.128s 00:08:58.060 user 0m0.054s 00:08:58.060 sys 0m0.080s 00:08:58.060 03:37:12 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.060 03:37:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:58.060 03:37:12 -- common/autotest_common.sh@1142 -- # return 0 00:08:58.060 03:37:12 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:58.060 03:37:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.060 03:37:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.060 03:37:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.060 ************************************ 00:08:58.060 START TEST json_config 00:08:58.060 ************************************ 00:08:58.060 03:37:12 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6a2b635f-f6bf-4f3f-b455-5414d1e9f6aa 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6a2b635f-f6bf-4f3f-b455-5414d1e9f6aa 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.060 03:37:12 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.060 03:37:12 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.060 03:37:12 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.060 03:37:12 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.060 03:37:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.060 03:37:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.060 03:37:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.060 03:37:12 json_config -- paths/export.sh@5 -- # export PATH 00:08:58.060 03:37:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@47 -- # : 0 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.060 03:37:12 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:58.060 WARNING: No tests are enabled so not running JSON configuration tests 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:58.060 03:37:12 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:58.060 ************************************ 00:08:58.060 END TEST json_config 00:08:58.060 ************************************ 00:08:58.060 00:08:58.060 real 0m0.072s 00:08:58.060 user 0m0.037s 00:08:58.060 sys 0m0.035s 00:08:58.060 03:37:12 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.060 03:37:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:58.060 03:37:12 -- common/autotest_common.sh@1142 -- # return 0 00:08:58.060 03:37:12 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:58.060 03:37:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.060 03:37:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.060 03:37:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.060 ************************************ 00:08:58.060 START TEST json_config_extra_key 00:08:58.060 ************************************ 00:08:58.060 03:37:12 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:58.060 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6a2b635f-f6bf-4f3f-b455-5414d1e9f6aa 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6a2b635f-f6bf-4f3f-b455-5414d1e9f6aa 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.060 03:37:12 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.060 03:37:12 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:58.060 03:37:12 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.060 03:37:12 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.061 03:37:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.061 03:37:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.061 03:37:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.061 03:37:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:58.061 03:37:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.061 03:37:12 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:58.061 INFO: launching applications... 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:58.061 03:37:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=63225 00:08:58.061 Waiting for target to run... 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:58.061 03:37:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 63225 /var/tmp/spdk_tgt.sock 00:08:58.061 03:37:12 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 63225 ']' 00:08:58.061 03:37:12 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:58.061 03:37:12 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:58.061 03:37:12 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:58.061 03:37:12 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.061 03:37:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:58.061 [2024-07-26 03:37:12.886754] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:08:58.061 [2024-07-26 03:37:12.887489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63225 ] 00:08:58.626 [2024-07-26 03:37:13.226775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.626 [2024-07-26 03:37:13.423858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.559 03:37:14 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.559 00:08:59.559 03:37:14 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:08:59.559 03:37:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:59.559 INFO: shutting down applications... 00:08:59.559 03:37:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:59.559 03:37:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:59.559 03:37:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:59.559 03:37:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:59.560 03:37:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 63225 ]] 00:08:59.560 03:37:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 63225 00:08:59.560 03:37:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:59.560 03:37:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:59.560 03:37:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63225 00:08:59.560 03:37:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:59.818 03:37:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:59.818 03:37:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:59.818 03:37:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63225 00:08:59.818 03:37:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:00.384 03:37:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:00.384 03:37:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:00.384 03:37:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63225 00:09:00.384 03:37:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:00.950 03:37:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:00.950 03:37:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:00.950 03:37:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63225 00:09:00.950 03:37:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:01.517 03:37:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:01.517 03:37:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:01.517 03:37:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63225 00:09:01.517 03:37:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:01.775 03:37:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:01.775 03:37:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:01.775 03:37:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63225 
00:09:01.775 03:37:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:02.344 03:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:02.344 03:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:02.344 03:37:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63225 00:09:02.344 03:37:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:02.344 03:37:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:02.345 SPDK target shutdown done 00:09:02.345 03:37:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:02.345 03:37:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:02.345 Success 00:09:02.345 03:37:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:02.345 ************************************ 00:09:02.345 END TEST json_config_extra_key 00:09:02.345 ************************************ 00:09:02.345 00:09:02.345 real 0m4.426s 00:09:02.345 user 0m3.978s 00:09:02.345 sys 0m0.442s 00:09:02.345 03:37:17 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.345 03:37:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:02.345 03:37:17 -- common/autotest_common.sh@1142 -- # return 0 00:09:02.345 03:37:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:02.345 03:37:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:02.345 03:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.345 03:37:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.345 ************************************ 00:09:02.345 START TEST alias_rpc 00:09:02.345 ************************************ 00:09:02.345 03:37:17 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:02.608 * Looking for test storage... 00:09:02.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:02.608 03:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:02.608 03:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63324 00:09:02.608 03:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:02.608 03:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63324 00:09:02.608 03:37:17 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 63324 ']' 00:09:02.608 03:37:17 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.608 03:37:17 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.608 03:37:17 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.608 03:37:17 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.608 03:37:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.608 [2024-07-26 03:37:17.361526] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:09:02.608 [2024-07-26 03:37:17.361715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63324 ] 00:09:02.866 [2024-07-26 03:37:17.526520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.124 [2024-07-26 03:37:17.773085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.716 03:37:18 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.716 03:37:18 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:03.716 03:37:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:03.976 03:37:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63324 00:09:03.976 03:37:18 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 63324 ']' 00:09:03.976 03:37:18 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 63324 00:09:03.976 03:37:18 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:09:03.976 03:37:18 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.976 03:37:18 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63324 00:09:04.235 03:37:18 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:04.235 killing process with pid 63324 00:09:04.235 03:37:18 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:04.235 03:37:18 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63324' 00:09:04.235 03:37:18 alias_rpc -- common/autotest_common.sh@967 -- # kill 63324 00:09:04.235 03:37:18 alias_rpc -- common/autotest_common.sh@972 -- # wait 63324 00:09:06.135 00:09:06.135 real 0m3.838s 00:09:06.135 user 0m4.177s 00:09:06.135 sys 0m0.454s 00:09:06.135 03:37:21 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.135 03:37:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.135 ************************************ 00:09:06.135 END TEST alias_rpc 00:09:06.135 ************************************ 00:09:06.394 03:37:21 -- common/autotest_common.sh@1142 -- # return 0 00:09:06.394 03:37:21 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:06.394 03:37:21 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:06.394 03:37:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.394 03:37:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.394 03:37:21 -- common/autotest_common.sh@10 -- # set +x 00:09:06.394 ************************************ 00:09:06.394 START TEST spdkcli_tcp 00:09:06.394 ************************************ 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:06.394 * Looking for test storage... 
00:09:06.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63418 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63418 00:09:06.394 03:37:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 63418 ']' 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.394 03:37:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.394 [2024-07-26 03:37:21.267411] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:09:06.394 [2024-07-26 03:37:21.267613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ] 00:09:06.653 [2024-07-26 03:37:21.441421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:06.910 [2024-07-26 03:37:21.644206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.910 [2024-07-26 03:37:21.644208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.849 03:37:22 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.849 03:37:22 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:09:07.849 03:37:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63445 00:09:07.849 03:37:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:07.849 03:37:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:08.119 [ 00:09:08.119 "bdev_malloc_delete", 00:09:08.119 "bdev_malloc_create", 00:09:08.119 "bdev_null_resize", 00:09:08.119 "bdev_null_delete", 00:09:08.119 "bdev_null_create", 00:09:08.119 "bdev_nvme_cuse_unregister", 00:09:08.119 "bdev_nvme_cuse_register", 00:09:08.119 "bdev_opal_new_user", 00:09:08.119 "bdev_opal_set_lock_state", 00:09:08.119 "bdev_opal_delete", 00:09:08.119 "bdev_opal_get_info", 00:09:08.119 "bdev_opal_create", 00:09:08.119 "bdev_nvme_opal_revert", 00:09:08.119 "bdev_nvme_opal_init", 00:09:08.119 "bdev_nvme_send_cmd", 00:09:08.119 "bdev_nvme_get_path_iostat", 00:09:08.119 "bdev_nvme_get_mdns_discovery_info", 00:09:08.119 "bdev_nvme_stop_mdns_discovery", 00:09:08.119 "bdev_nvme_start_mdns_discovery", 00:09:08.119 "bdev_nvme_set_multipath_policy", 00:09:08.119 "bdev_nvme_set_preferred_path", 00:09:08.120 "bdev_nvme_get_io_paths", 00:09:08.120 "bdev_nvme_remove_error_injection", 00:09:08.120 "bdev_nvme_add_error_injection", 00:09:08.120 "bdev_nvme_get_discovery_info", 00:09:08.120 "bdev_nvme_stop_discovery", 00:09:08.120 "bdev_nvme_start_discovery", 00:09:08.120 "bdev_nvme_get_controller_health_info", 00:09:08.120 "bdev_nvme_disable_controller", 00:09:08.120 "bdev_nvme_enable_controller", 00:09:08.120 "bdev_nvme_reset_controller", 00:09:08.120 "bdev_nvme_get_transport_statistics", 00:09:08.120 "bdev_nvme_apply_firmware", 00:09:08.120 "bdev_nvme_detach_controller", 00:09:08.120 "bdev_nvme_get_controllers", 00:09:08.120 "bdev_nvme_attach_controller", 00:09:08.120 "bdev_nvme_set_hotplug", 00:09:08.120 "bdev_nvme_set_options", 00:09:08.120 "bdev_passthru_delete", 00:09:08.120 "bdev_passthru_create", 00:09:08.120 "bdev_lvol_set_parent_bdev", 00:09:08.120 "bdev_lvol_set_parent", 00:09:08.120 "bdev_lvol_check_shallow_copy", 00:09:08.120 "bdev_lvol_start_shallow_copy", 00:09:08.120 "bdev_lvol_grow_lvstore", 00:09:08.120 "bdev_lvol_get_lvols", 00:09:08.120 "bdev_lvol_get_lvstores", 00:09:08.120 "bdev_lvol_delete", 00:09:08.120 "bdev_lvol_set_read_only", 00:09:08.120 "bdev_lvol_resize", 00:09:08.120 "bdev_lvol_decouple_parent", 00:09:08.120 "bdev_lvol_inflate", 00:09:08.120 "bdev_lvol_rename", 00:09:08.120 "bdev_lvol_clone_bdev", 00:09:08.120 "bdev_lvol_clone", 00:09:08.120 "bdev_lvol_snapshot", 00:09:08.120 "bdev_lvol_create", 00:09:08.120 "bdev_lvol_delete_lvstore", 00:09:08.120 "bdev_lvol_rename_lvstore", 00:09:08.120 "bdev_lvol_create_lvstore", 
00:09:08.120 "bdev_raid_set_options", 00:09:08.120 "bdev_raid_remove_base_bdev", 00:09:08.120 "bdev_raid_add_base_bdev", 00:09:08.120 "bdev_raid_delete", 00:09:08.120 "bdev_raid_create", 00:09:08.120 "bdev_raid_get_bdevs", 00:09:08.120 "bdev_error_inject_error", 00:09:08.120 "bdev_error_delete", 00:09:08.120 "bdev_error_create", 00:09:08.120 "bdev_split_delete", 00:09:08.120 "bdev_split_create", 00:09:08.120 "bdev_delay_delete", 00:09:08.120 "bdev_delay_create", 00:09:08.120 "bdev_delay_update_latency", 00:09:08.120 "bdev_zone_block_delete", 00:09:08.120 "bdev_zone_block_create", 00:09:08.120 "blobfs_create", 00:09:08.120 "blobfs_detect", 00:09:08.120 "blobfs_set_cache_size", 00:09:08.120 "bdev_xnvme_delete", 00:09:08.120 "bdev_xnvme_create", 00:09:08.120 "bdev_aio_delete", 00:09:08.120 "bdev_aio_rescan", 00:09:08.120 "bdev_aio_create", 00:09:08.120 "bdev_ftl_set_property", 00:09:08.120 "bdev_ftl_get_properties", 00:09:08.120 "bdev_ftl_get_stats", 00:09:08.120 "bdev_ftl_unmap", 00:09:08.120 "bdev_ftl_unload", 00:09:08.120 "bdev_ftl_delete", 00:09:08.120 "bdev_ftl_load", 00:09:08.120 "bdev_ftl_create", 00:09:08.120 "bdev_virtio_attach_controller", 00:09:08.120 "bdev_virtio_scsi_get_devices", 00:09:08.120 "bdev_virtio_detach_controller", 00:09:08.120 "bdev_virtio_blk_set_hotplug", 00:09:08.120 "bdev_iscsi_delete", 00:09:08.120 "bdev_iscsi_create", 00:09:08.120 "bdev_iscsi_set_options", 00:09:08.120 "accel_error_inject_error", 00:09:08.120 "ioat_scan_accel_module", 00:09:08.120 "dsa_scan_accel_module", 00:09:08.120 "iaa_scan_accel_module", 00:09:08.120 "keyring_file_remove_key", 00:09:08.120 "keyring_file_add_key", 00:09:08.120 "keyring_linux_set_options", 00:09:08.120 "iscsi_get_histogram", 00:09:08.120 "iscsi_enable_histogram", 00:09:08.120 "iscsi_set_options", 00:09:08.120 "iscsi_get_auth_groups", 00:09:08.120 "iscsi_auth_group_remove_secret", 00:09:08.120 "iscsi_auth_group_add_secret", 00:09:08.120 "iscsi_delete_auth_group", 00:09:08.120 "iscsi_create_auth_group", 00:09:08.120 "iscsi_set_discovery_auth", 00:09:08.120 "iscsi_get_options", 00:09:08.120 "iscsi_target_node_request_logout", 00:09:08.120 "iscsi_target_node_set_redirect", 00:09:08.120 "iscsi_target_node_set_auth", 00:09:08.120 "iscsi_target_node_add_lun", 00:09:08.120 "iscsi_get_stats", 00:09:08.120 "iscsi_get_connections", 00:09:08.120 "iscsi_portal_group_set_auth", 00:09:08.120 "iscsi_start_portal_group", 00:09:08.120 "iscsi_delete_portal_group", 00:09:08.120 "iscsi_create_portal_group", 00:09:08.120 "iscsi_get_portal_groups", 00:09:08.120 "iscsi_delete_target_node", 00:09:08.120 "iscsi_target_node_remove_pg_ig_maps", 00:09:08.120 "iscsi_target_node_add_pg_ig_maps", 00:09:08.120 "iscsi_create_target_node", 00:09:08.120 "iscsi_get_target_nodes", 00:09:08.120 "iscsi_delete_initiator_group", 00:09:08.120 "iscsi_initiator_group_remove_initiators", 00:09:08.120 "iscsi_initiator_group_add_initiators", 00:09:08.120 "iscsi_create_initiator_group", 00:09:08.120 "iscsi_get_initiator_groups", 00:09:08.120 "nvmf_set_crdt", 00:09:08.120 "nvmf_set_config", 00:09:08.120 "nvmf_set_max_subsystems", 00:09:08.120 "nvmf_stop_mdns_prr", 00:09:08.120 "nvmf_publish_mdns_prr", 00:09:08.120 "nvmf_subsystem_get_listeners", 00:09:08.120 "nvmf_subsystem_get_qpairs", 00:09:08.120 "nvmf_subsystem_get_controllers", 00:09:08.120 "nvmf_get_stats", 00:09:08.120 "nvmf_get_transports", 00:09:08.120 "nvmf_create_transport", 00:09:08.120 "nvmf_get_targets", 00:09:08.120 "nvmf_delete_target", 00:09:08.120 "nvmf_create_target", 00:09:08.120 
"nvmf_subsystem_allow_any_host", 00:09:08.120 "nvmf_subsystem_remove_host", 00:09:08.120 "nvmf_subsystem_add_host", 00:09:08.120 "nvmf_ns_remove_host", 00:09:08.120 "nvmf_ns_add_host", 00:09:08.120 "nvmf_subsystem_remove_ns", 00:09:08.120 "nvmf_subsystem_add_ns", 00:09:08.120 "nvmf_subsystem_listener_set_ana_state", 00:09:08.120 "nvmf_discovery_get_referrals", 00:09:08.120 "nvmf_discovery_remove_referral", 00:09:08.120 "nvmf_discovery_add_referral", 00:09:08.120 "nvmf_subsystem_remove_listener", 00:09:08.120 "nvmf_subsystem_add_listener", 00:09:08.120 "nvmf_delete_subsystem", 00:09:08.120 "nvmf_create_subsystem", 00:09:08.120 "nvmf_get_subsystems", 00:09:08.120 "env_dpdk_get_mem_stats", 00:09:08.120 "nbd_get_disks", 00:09:08.120 "nbd_stop_disk", 00:09:08.120 "nbd_start_disk", 00:09:08.120 "ublk_recover_disk", 00:09:08.120 "ublk_get_disks", 00:09:08.120 "ublk_stop_disk", 00:09:08.120 "ublk_start_disk", 00:09:08.120 "ublk_destroy_target", 00:09:08.120 "ublk_create_target", 00:09:08.120 "virtio_blk_create_transport", 00:09:08.120 "virtio_blk_get_transports", 00:09:08.120 "vhost_controller_set_coalescing", 00:09:08.120 "vhost_get_controllers", 00:09:08.120 "vhost_delete_controller", 00:09:08.120 "vhost_create_blk_controller", 00:09:08.120 "vhost_scsi_controller_remove_target", 00:09:08.120 "vhost_scsi_controller_add_target", 00:09:08.120 "vhost_start_scsi_controller", 00:09:08.120 "vhost_create_scsi_controller", 00:09:08.120 "thread_set_cpumask", 00:09:08.120 "framework_get_governor", 00:09:08.120 "framework_get_scheduler", 00:09:08.120 "framework_set_scheduler", 00:09:08.120 "framework_get_reactors", 00:09:08.120 "thread_get_io_channels", 00:09:08.120 "thread_get_pollers", 00:09:08.120 "thread_get_stats", 00:09:08.120 "framework_monitor_context_switch", 00:09:08.120 "spdk_kill_instance", 00:09:08.120 "log_enable_timestamps", 00:09:08.120 "log_get_flags", 00:09:08.120 "log_clear_flag", 00:09:08.120 "log_set_flag", 00:09:08.120 "log_get_level", 00:09:08.120 "log_set_level", 00:09:08.120 "log_get_print_level", 00:09:08.120 "log_set_print_level", 00:09:08.120 "framework_enable_cpumask_locks", 00:09:08.120 "framework_disable_cpumask_locks", 00:09:08.120 "framework_wait_init", 00:09:08.120 "framework_start_init", 00:09:08.120 "scsi_get_devices", 00:09:08.120 "bdev_get_histogram", 00:09:08.120 "bdev_enable_histogram", 00:09:08.120 "bdev_set_qos_limit", 00:09:08.120 "bdev_set_qd_sampling_period", 00:09:08.120 "bdev_get_bdevs", 00:09:08.120 "bdev_reset_iostat", 00:09:08.120 "bdev_get_iostat", 00:09:08.120 "bdev_examine", 00:09:08.120 "bdev_wait_for_examine", 00:09:08.120 "bdev_set_options", 00:09:08.120 "notify_get_notifications", 00:09:08.120 "notify_get_types", 00:09:08.120 "accel_get_stats", 00:09:08.120 "accel_set_options", 00:09:08.120 "accel_set_driver", 00:09:08.120 "accel_crypto_key_destroy", 00:09:08.120 "accel_crypto_keys_get", 00:09:08.120 "accel_crypto_key_create", 00:09:08.120 "accel_assign_opc", 00:09:08.120 "accel_get_module_info", 00:09:08.120 "accel_get_opc_assignments", 00:09:08.120 "vmd_rescan", 00:09:08.120 "vmd_remove_device", 00:09:08.120 "vmd_enable", 00:09:08.120 "sock_get_default_impl", 00:09:08.120 "sock_set_default_impl", 00:09:08.120 "sock_impl_set_options", 00:09:08.120 "sock_impl_get_options", 00:09:08.120 "iobuf_get_stats", 00:09:08.120 "iobuf_set_options", 00:09:08.120 "framework_get_pci_devices", 00:09:08.120 "framework_get_config", 00:09:08.120 "framework_get_subsystems", 00:09:08.120 "trace_get_info", 00:09:08.120 "trace_get_tpoint_group_mask", 00:09:08.120 
"trace_disable_tpoint_group", 00:09:08.120 "trace_enable_tpoint_group", 00:09:08.120 "trace_clear_tpoint_mask", 00:09:08.120 "trace_set_tpoint_mask", 00:09:08.121 "keyring_get_keys", 00:09:08.121 "spdk_get_version", 00:09:08.121 "rpc_get_methods" 00:09:08.121 ] 00:09:08.121 03:37:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.121 03:37:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:08.121 03:37:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63418 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 63418 ']' 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 63418 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63418 00:09:08.121 killing process with pid 63418 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63418' 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 63418 00:09:08.121 03:37:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 63418 00:09:10.651 ************************************ 00:09:10.651 END TEST spdkcli_tcp 00:09:10.651 ************************************ 00:09:10.651 00:09:10.651 real 0m3.986s 00:09:10.651 user 0m7.364s 00:09:10.651 sys 0m0.526s 00:09:10.651 03:37:25 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.651 03:37:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.651 03:37:25 -- common/autotest_common.sh@1142 -- # return 0 00:09:10.651 03:37:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:10.651 03:37:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.651 03:37:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.651 03:37:25 -- common/autotest_common.sh@10 -- # set +x 00:09:10.651 ************************************ 00:09:10.651 START TEST dpdk_mem_utility 00:09:10.651 ************************************ 00:09:10.651 03:37:25 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:10.651 * Looking for test storage... 
00:09:10.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:10.651 03:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:10.651 03:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63533 00:09:10.651 03:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63533 00:09:10.651 03:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.651 03:37:25 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63533 ']' 00:09:10.651 03:37:25 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.651 03:37:25 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.651 03:37:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.651 03:37:25 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.651 03:37:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:10.651 [2024-07-26 03:37:25.309887] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:09:10.651 [2024-07-26 03:37:25.310131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63533 ] 00:09:10.651 [2024-07-26 03:37:25.495487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.909 [2024-07-26 03:37:25.689892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.846 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.846 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:09:11.846 03:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:11.846 03:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:11.846 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.846 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:11.846 { 00:09:11.846 "filename": "/tmp/spdk_mem_dump.txt" 00:09:11.846 } 00:09:11.846 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.846 03:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:11.846 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:11.846 1 heaps totaling size 820.000000 MiB 00:09:11.846 size: 820.000000 MiB heap id: 0 00:09:11.846 end heaps---------- 00:09:11.846 8 mempools totaling size 598.116089 MiB 00:09:11.846 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:11.846 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:11.846 size: 84.521057 MiB name: bdev_io_63533 00:09:11.846 size: 51.011292 MiB name: evtpool_63533 00:09:11.846 size: 50.003479 MiB name: msgpool_63533 00:09:11.846 size: 21.763794 MiB name: PDU_Pool 00:09:11.846 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:09:11.846 size: 0.026123 MiB name: Session_Pool 00:09:11.846 end mempools------- 00:09:11.846 6 memzones totaling size 4.142822 MiB 00:09:11.846 size: 1.000366 MiB name: RG_ring_0_63533 00:09:11.846 size: 1.000366 MiB name: RG_ring_1_63533 00:09:11.846 size: 1.000366 MiB name: RG_ring_4_63533 00:09:11.846 size: 1.000366 MiB name: RG_ring_5_63533 00:09:11.846 size: 0.125366 MiB name: RG_ring_2_63533 00:09:11.846 size: 0.015991 MiB name: RG_ring_3_63533 00:09:11.846 end memzones------- 00:09:11.846 03:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:11.846 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:09:11.846 list of free elements. size: 18.451538 MiB 00:09:11.846 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:11.846 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:11.846 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:11.846 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:11.846 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:11.846 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:11.846 element at address: 0x200019600000 with size: 0.999084 MiB 00:09:11.846 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:11.846 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:11.846 element at address: 0x200018e00000 with size: 0.959656 MiB 00:09:11.846 element at address: 0x200019900040 with size: 0.936401 MiB 00:09:11.846 element at address: 0x200000200000 with size: 0.829956 MiB 00:09:11.846 element at address: 0x20001b000000 with size: 0.564148 MiB 00:09:11.846 element at address: 0x200019200000 with size: 0.487976 MiB 00:09:11.846 element at address: 0x200019a00000 with size: 0.485413 MiB 00:09:11.846 element at address: 0x200013800000 with size: 0.467896 MiB 00:09:11.846 element at address: 0x200028400000 with size: 0.390442 MiB 00:09:11.846 element at address: 0x200003a00000 with size: 0.351990 MiB 00:09:11.846 list of standard malloc elements. 
size: 199.284058 MiB 00:09:11.846 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:11.846 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:11.846 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:11.846 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:11.846 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:11.846 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:11.846 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:11.846 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:11.846 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:09:11.846 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:09:11.846 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:09:11.846 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:09:11.846 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:11.846 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:09:11.846 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200003eff000 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:09:11.847 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013877c80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013877d80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013877e80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013877f80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013878080 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013878180 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013878280 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013878380 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013878480 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200013878580 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200019abc680 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b090ec0 
with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b093fc0 with size: 0.000244 MiB 
00:09:11.847 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200028463f40 with size: 0.000244 MiB 00:09:11.847 element at address: 0x200028464040 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20002846af80 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20002846b080 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20002846b180 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20002846b280 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20002846b380 with size: 0.000244 MiB 00:09:11.847 element at address: 0x20002846b480 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846b580 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846b680 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846b780 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846b880 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846b980 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846be80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c080 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c180 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c280 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c380 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c480 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c580 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c680 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c780 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846c880 with size: 0.000244 MiB 00:09:11.848 element at 
address: 0x20002846c980 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d080 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d180 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d280 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d380 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d480 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d580 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d680 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d780 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d880 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846d980 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846da80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846db80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846de80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846df80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e080 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e180 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e280 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e380 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e480 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e580 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e680 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e780 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e880 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846e980 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f080 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f180 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f280 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f380 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f480 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f580 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f680 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f780 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f880 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846f980 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846fa80 
with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:09:11.848 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:09:11.848 list of memzone associated elements. size: 602.264404 MiB 00:09:11.848 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:09:11.848 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:11.848 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:09:11.848 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:11.848 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:09:11.848 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63533_0 00:09:11.848 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:11.848 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63533_0 00:09:11.848 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:11.848 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63533_0 00:09:11.848 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:09:11.848 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:11.848 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:09:11.848 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:11.848 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:11.848 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63533 00:09:11.848 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:11.848 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63533 00:09:11.848 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:11.848 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63533 00:09:11.848 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:11.848 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:11.848 element at address: 0x200019abc780 with size: 1.008179 MiB 00:09:11.848 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:11.848 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:11.848 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:11.848 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:09:11.848 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:11.848 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:11.848 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63533 00:09:11.848 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:11.848 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63533 00:09:11.848 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:09:11.848 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63533 00:09:11.848 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:09:11.848 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63533 00:09:11.848 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:09:11.848 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63533 00:09:11.848 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:09:11.848 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:11.848 element at address: 0x200013878680 with size: 0.500549 MiB 
00:09:11.848 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:11.848 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:09:11.848 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:11.848 element at address: 0x200003adf740 with size: 0.125549 MiB 00:09:11.848 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63533 00:09:11.848 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:09:11.848 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:11.848 element at address: 0x200028464140 with size: 0.023804 MiB 00:09:11.848 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:11.848 element at address: 0x200003adb500 with size: 0.016174 MiB 00:09:11.848 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63533 00:09:11.848 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:09:11.848 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:11.848 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:09:11.848 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63533 00:09:11.848 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:09:11.848 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63533 00:09:11.848 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:09:11.848 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:11.848 03:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:11.848 03:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63533 00:09:11.848 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63533 ']' 00:09:11.848 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63533 00:09:11.848 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:09:11.848 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.848 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63533 00:09:11.848 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.849 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.849 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63533' 00:09:11.849 killing process with pid 63533 00:09:11.849 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63533 00:09:11.849 03:37:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63533 00:09:14.399 00:09:14.399 real 0m3.729s 00:09:14.399 user 0m3.849s 00:09:14.399 sys 0m0.459s 00:09:14.399 03:37:28 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.399 03:37:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:14.399 ************************************ 00:09:14.399 END TEST dpdk_mem_utility 00:09:14.399 ************************************ 00:09:14.399 03:37:28 -- common/autotest_common.sh@1142 -- # return 0 00:09:14.399 03:37:28 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:14.399 03:37:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:14.399 03:37:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.399 03:37:28 -- common/autotest_common.sh@10 -- # set +x 00:09:14.399 
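The killprocess sequence traced above is the standard autotest teardown: probe the pid with kill -0, check the process name with ps so a sudo wrapper is never signalled, then kill and wait for the reactor to exit. A minimal sketch of that pattern (illustrative only, not the exact autotest_common.sh implementation):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 0                 # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1             # refuse to signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it so sockets and hugepages are released
    }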
************************************ 00:09:14.399 START TEST event 00:09:14.399 ************************************ 00:09:14.399 03:37:28 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:14.399 * Looking for test storage... 00:09:14.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:14.400 03:37:28 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:14.400 03:37:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:14.400 03:37:28 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:14.400 03:37:28 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:14.400 03:37:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.400 03:37:28 event -- common/autotest_common.sh@10 -- # set +x 00:09:14.400 ************************************ 00:09:14.400 START TEST event_perf 00:09:14.400 ************************************ 00:09:14.400 03:37:28 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:14.400 Running I/O for 1 seconds...[2024-07-26 03:37:29.006124] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:09:14.400 [2024-07-26 03:37:29.006327] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63633 ] 00:09:14.400 [2024-07-26 03:37:29.212772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.658 [2024-07-26 03:37:29.424336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.658 [2024-07-26 03:37:29.424491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.658 Running I/O for 1 seconds...[2024-07-26 03:37:29.424560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.658 [2024-07-26 03:37:29.424570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.029 00:09:16.029 lcore 0: 160102 00:09:16.029 lcore 1: 160103 00:09:16.029 lcore 2: 160104 00:09:16.029 lcore 3: 160105 00:09:16.029 done. 
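Each of the four reactors retired roughly 160 k events during the 1-second measurement window, about 640 k events in total. Summing the per-lcore counters is a quick sanity check; for example, against a capture of the tool's stdout (hypothetical file name):

    # sum the "lcore N: <count>" lines printed by event_perf
    awk '/^lcore [0-9]+:/ {sum += $3} END {print sum, "events in the 1s window"}' event_perf.log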
00:09:16.029 00:09:16.029 real 0m1.898s 00:09:16.029 user 0m4.644s 00:09:16.029 sys 0m0.118s 00:09:16.029 03:37:30 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.029 03:37:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:16.029 ************************************ 00:09:16.029 END TEST event_perf 00:09:16.029 ************************************ 00:09:16.029 03:37:30 event -- common/autotest_common.sh@1142 -- # return 0 00:09:16.029 03:37:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:16.029 03:37:30 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:16.029 03:37:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.029 03:37:30 event -- common/autotest_common.sh@10 -- # set +x 00:09:16.029 ************************************ 00:09:16.029 START TEST event_reactor 00:09:16.029 ************************************ 00:09:16.029 03:37:30 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:16.286 [2024-07-26 03:37:30.950497] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:09:16.286 [2024-07-26 03:37:30.950707] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63678 ] 00:09:16.286 [2024-07-26 03:37:31.133956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.544 [2024-07-26 03:37:31.383698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.916 test_start 00:09:17.916 oneshot 00:09:17.916 tick 100 00:09:17.916 tick 100 00:09:17.916 tick 250 00:09:17.916 tick 100 00:09:17.916 tick 100 00:09:17.916 tick 100 00:09:17.916 tick 250 00:09:17.916 tick 500 00:09:17.916 tick 100 00:09:17.916 tick 100 00:09:17.916 tick 250 00:09:17.916 tick 100 00:09:17.916 tick 100 00:09:17.916 test_end 00:09:17.916 00:09:17.916 real 0m1.889s 00:09:17.916 user 0m1.665s 00:09:17.916 sys 0m0.109s 00:09:17.916 03:37:32 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.916 03:37:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:17.916 ************************************ 00:09:17.916 END TEST event_reactor 00:09:17.916 ************************************ 00:09:18.174 03:37:32 event -- common/autotest_common.sh@1142 -- # return 0 00:09:18.174 03:37:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:18.174 03:37:32 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:18.174 03:37:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.174 03:37:32 event -- common/autotest_common.sh@10 -- # set +x 00:09:18.174 ************************************ 00:09:18.174 START TEST event_reactor_perf 00:09:18.174 ************************************ 00:09:18.174 03:37:32 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:18.174 [2024-07-26 03:37:32.886185] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:09:18.174 [2024-07-26 03:37:32.886439] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63720 ] 00:09:18.174 [2024-07-26 03:37:33.068632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.431 [2024-07-26 03:37:33.296998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.330 test_start 00:09:20.330 test_end 00:09:20.330 Performance: 253725 events per second 00:09:20.330 00:09:20.330 real 0m1.931s 00:09:20.330 user 0m1.692s 00:09:20.330 sys 0m0.122s 00:09:20.330 03:37:34 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.330 ************************************ 00:09:20.330 END TEST event_reactor_perf 00:09:20.330 ************************************ 00:09:20.330 03:37:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:20.330 03:37:34 event -- common/autotest_common.sh@1142 -- # return 0 00:09:20.330 03:37:34 event -- event/event.sh@49 -- # uname -s 00:09:20.330 03:37:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:20.330 03:37:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:20.330 03:37:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.330 03:37:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.330 03:37:34 event -- common/autotest_common.sh@10 -- # set +x 00:09:20.330 ************************************ 00:09:20.330 START TEST event_scheduler 00:09:20.330 ************************************ 00:09:20.330 03:37:34 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:20.330 * Looking for test storage... 00:09:20.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:20.330 03:37:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:20.330 03:37:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63783 00:09:20.330 03:37:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:20.330 03:37:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:20.330 03:37:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63783 00:09:20.330 03:37:34 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63783 ']' 00:09:20.330 03:37:34 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.330 03:37:34 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.330 03:37:34 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.330 03:37:34 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.330 03:37:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:20.330 [2024-07-26 03:37:34.978515] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
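waitforlisten blocks until the just-launched scheduler app (pid 63783) is accepting RPCs on /var/tmp/spdk.sock. A rough equivalent, shown only to illustrate the idea rather than the real helper, is to poll a cheap RPC until it answers:

    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods is a lightweight RPC; success means the app is listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done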
00:09:20.330 [2024-07-26 03:37:34.978733] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63783 ] 00:09:20.330 [2024-07-26 03:37:35.145184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.589 [2024-07-26 03:37:35.384611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.589 [2024-07-26 03:37:35.384786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.589 [2024-07-26 03:37:35.384861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.589 [2024-07-26 03:37:35.385122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.215 03:37:36 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.215 03:37:36 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:09:21.215 03:37:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:21.215 03:37:36 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.215 03:37:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:21.215 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:21.215 POWER: Cannot set governor of lcore 0 to userspace 00:09:21.215 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:21.215 POWER: Cannot set governor of lcore 0 to performance 00:09:21.215 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:21.215 POWER: Cannot set governor of lcore 0 to userspace 00:09:21.215 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:21.215 POWER: Cannot set governor of lcore 0 to userspace 00:09:21.215 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:21.215 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:21.215 POWER: Unable to set Power Management Environment for lcore 0 00:09:21.215 [2024-07-26 03:37:36.055541] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:21.215 [2024-07-26 03:37:36.055581] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:21.215 [2024-07-26 03:37:36.055611] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:09:21.215 [2024-07-26 03:37:36.055652] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:21.215 [2024-07-26 03:37:36.055680] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:21.215 [2024-07-26 03:37:36.055702] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:21.215 03:37:36 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.215 03:37:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:21.215 03:37:36 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.215 03:37:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:21.782 [2024-07-26 03:37:36.453342] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
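The POWER errors show that this VM exposes no cpufreq sysfs nodes, so the DPDK governor cannot be initialized and the dynamic scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95). Because the app was started with --wait-for-rpc, the selection itself is just two RPCs issued through the rpc_cmd wrapper before init completes, as traced above:

    # rpc_cmd is the autotest wrapper for issuing JSON-RPCs to the app's socket
    # (scheduler.sh@29 sets rpc=rpc_cmd)
    rpc_cmd framework_set_scheduler dynamic   # degrades gracefully when no governor is available
    rpc_cmd framework_start_init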
00:09:21.782 03:37:36 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.782 03:37:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:21.782 03:37:36 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.782 03:37:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.782 03:37:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:21.782 ************************************ 00:09:21.782 START TEST scheduler_create_thread 00:09:21.782 ************************************ 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.782 2 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.782 3 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.782 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.782 4 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 5 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 6 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 7 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 8 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 9 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 10 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.783 03:37:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.716 03:37:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.716 03:37:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:22.716 03:37:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:22.716 03:37:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.716 03:37:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:24.090 03:37:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.090 00:09:24.090 real 0m2.138s 00:09:24.090 user 0m0.015s 00:09:24.090 sys 0m0.006s 00:09:24.090 03:37:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.090 ************************************ 00:09:24.090 03:37:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:24.090 END TEST scheduler_create_thread 00:09:24.090 ************************************ 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:09:24.090 03:37:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:24.090 03:37:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63783 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63783 ']' 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63783 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63783 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63783' 00:09:24.090 killing process with pid 63783 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63783 00:09:24.090 03:37:38 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63783 00:09:24.348 [2024-07-26 03:37:39.081756] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
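The scheduler_create_thread subtest drives the test app's RPC plugin directly: pinned active and idle threads on each core, an unpinned thread whose active load is changed at run time, and a throwaway thread that is created and then deleted. Condensed from the rpc_cmd calls traced above:

    # --plugin scheduler_plugin loads the test app's extra RPCs
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"   # id 12 in the run above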
00:09:25.749 ************************************ 00:09:25.749 END TEST event_scheduler 00:09:25.749 ************************************ 00:09:25.749 00:09:25.749 real 0m5.545s 00:09:25.749 user 0m9.572s 00:09:25.749 sys 0m0.402s 00:09:25.749 03:37:40 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.749 03:37:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:25.749 03:37:40 event -- common/autotest_common.sh@1142 -- # return 0 00:09:25.749 03:37:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:25.749 03:37:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:25.749 03:37:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.749 03:37:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.749 03:37:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:25.749 ************************************ 00:09:25.749 START TEST app_repeat 00:09:25.749 ************************************ 00:09:25.749 03:37:40 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63894 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:25.749 Process app_repeat pid: 63894 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63894' 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:25.749 spdk_app_start Round 0 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:25.749 03:37:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63894 /var/tmp/spdk-nbd.sock 00:09:25.749 03:37:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63894 ']' 00:09:25.749 03:37:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:25.749 03:37:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:25.749 03:37:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:25.749 03:37:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.749 03:37:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:25.749 [2024-07-26 03:37:40.479711] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:09:25.749 [2024-07-26 03:37:40.479969] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63894 ] 00:09:26.008 [2024-07-26 03:37:40.680153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:26.008 [2024-07-26 03:37:40.909161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.008 [2024-07-26 03:37:40.909174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.942 03:37:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.942 03:37:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:26.942 03:37:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:27.200 Malloc0 00:09:27.200 03:37:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:27.458 Malloc1 00:09:27.458 03:37:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.458 03:37:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:27.716 /dev/nbd0 00:09:27.716 03:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:27.716 03:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:27.716 03:37:42 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:27.716 1+0 records in 00:09:27.716 1+0 records out 00:09:27.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341656 s, 12.0 MB/s 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:27.716 03:37:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:27.716 03:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.716 03:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.716 03:37:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:27.974 /dev/nbd1 00:09:27.974 03:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:27.974 03:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:27.974 1+0 records in 00:09:27.974 1+0 records out 00:09:27.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372113 s, 11.0 MB/s 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:27.974 03:37:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:27.974 03:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.974 03:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.974 03:37:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:27.974 03:37:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:09:27.974 03:37:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:28.541 { 00:09:28.541 "nbd_device": "/dev/nbd0", 00:09:28.541 "bdev_name": "Malloc0" 00:09:28.541 }, 00:09:28.541 { 00:09:28.541 "nbd_device": "/dev/nbd1", 00:09:28.541 "bdev_name": "Malloc1" 00:09:28.541 } 00:09:28.541 ]' 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:28.541 { 00:09:28.541 "nbd_device": "/dev/nbd0", 00:09:28.541 "bdev_name": "Malloc0" 00:09:28.541 }, 00:09:28.541 { 00:09:28.541 "nbd_device": "/dev/nbd1", 00:09:28.541 "bdev_name": "Malloc1" 00:09:28.541 } 00:09:28.541 ]' 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:28.541 /dev/nbd1' 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:28.541 /dev/nbd1' 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:28.541 256+0 records in 00:09:28.541 256+0 records out 00:09:28.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00688214 s, 152 MB/s 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:28.541 256+0 records in 00:09:28.541 256+0 records out 00:09:28.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0466253 s, 22.5 MB/s 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.541 03:37:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:28.800 256+0 records in 00:09:28.800 256+0 records out 00:09:28.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.048108 s, 21.8 MB/s 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.800 03:37:43 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.800 03:37:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.058 03:37:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.624 03:37:44 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.624 03:37:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:29.881 03:37:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:29.881 03:37:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:30.448 03:37:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:31.818 [2024-07-26 03:37:46.389191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.818 [2024-07-26 03:37:46.573378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.818 [2024-07-26 03:37:46.573392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.076 [2024-07-26 03:37:46.743674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:32.076 [2024-07-26 03:37:46.743777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:33.447 03:37:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:33.447 spdk_app_start Round 1 00:09:33.447 03:37:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:33.448 03:37:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63894 /var/tmp/spdk-nbd.sock 00:09:33.448 03:37:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63894 ']' 00:09:33.448 03:37:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:33.448 03:37:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:33.448 03:37:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
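Round 1 now repeats the per-round flow that Round 0 just completed: create two malloc bdevs, expose them over nbd, write random data with dd, read it back with cmp, then tear the nbd devices down. Condensed from the commands traced above (nbdtest/nbdrandtest are the test's own scratch files under test/event/):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc_py bdev_malloc_create 64 4096                    # -> Malloc0
    $rpc_py bdev_malloc_create 64 4096                    # -> Malloc1
    $rpc_py nbd_start_disk Malloc0 /dev/nbd0
    $rpc_py nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $nbd                     # read back and compare
    done
    $rpc_py nbd_stop_disk /dev/nbd0
    $rpc_py nbd_stop_disk /dev/nbd1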
00:09:33.448 03:37:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.448 03:37:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:33.705 03:37:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.705 03:37:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:33.705 03:37:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:34.270 Malloc0 00:09:34.270 03:37:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:34.528 Malloc1 00:09:34.528 03:37:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:34.528 03:37:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:34.791 /dev/nbd0 00:09:35.054 03:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:35.054 03:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:35.054 1+0 records in 00:09:35.054 1+0 records out 
00:09:35.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305323 s, 13.4 MB/s 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.054 03:37:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:35.054 03:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.054 03:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:35.054 03:37:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:35.054 /dev/nbd1 00:09:35.313 03:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:35.313 03:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:35.313 1+0 records in 00:09:35.313 1+0 records out 00:09:35.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400026 s, 10.2 MB/s 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.313 03:37:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:35.313 03:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.313 03:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:35.313 03:37:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:35.313 03:37:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.313 03:37:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:35.571 { 00:09:35.571 "nbd_device": "/dev/nbd0", 00:09:35.571 "bdev_name": "Malloc0" 00:09:35.571 }, 00:09:35.571 { 00:09:35.571 "nbd_device": "/dev/nbd1", 00:09:35.571 "bdev_name": "Malloc1" 00:09:35.571 } 
00:09:35.571 ]' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:35.571 { 00:09:35.571 "nbd_device": "/dev/nbd0", 00:09:35.571 "bdev_name": "Malloc0" 00:09:35.571 }, 00:09:35.571 { 00:09:35.571 "nbd_device": "/dev/nbd1", 00:09:35.571 "bdev_name": "Malloc1" 00:09:35.571 } 00:09:35.571 ]' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:35.571 /dev/nbd1' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:35.571 /dev/nbd1' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:35.571 256+0 records in 00:09:35.571 256+0 records out 00:09:35.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00727426 s, 144 MB/s 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:35.571 256+0 records in 00:09:35.571 256+0 records out 00:09:35.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031199 s, 33.6 MB/s 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:35.571 256+0 records in 00:09:35.571 256+0 records out 00:09:35.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369615 s, 28.4 MB/s 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:35.571 03:37:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:35.572 03:37:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.572 03:37:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:35.572 03:37:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:35.572 03:37:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:35.572 03:37:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.572 03:37:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.849 03:37:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:36.115 03:37:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:36.115 03:37:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:36.115 03:37:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:36.115 03:37:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.115 03:37:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.115 03:37:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:36.373 03:37:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:36.373 03:37:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.373 03:37:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.373 03:37:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.373 03:37:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:36.631 03:37:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:36.631 03:37:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:37.197 03:37:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:38.570 [2024-07-26 03:37:53.247554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.570 [2024-07-26 03:37:53.432887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.571 [2024-07-26 03:37:53.432889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.829 [2024-07-26 03:37:53.605977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:38.829 [2024-07-26 03:37:53.606108] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:40.205 spdk_app_start Round 2 00:09:40.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:40.205 03:37:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:40.205 03:37:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:40.205 03:37:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63894 /var/tmp/spdk-nbd.sock 00:09:40.205 03:37:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63894 ']' 00:09:40.205 03:37:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:40.205 03:37:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.205 03:37:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
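The check that closes each round is the nbd_get_count sequence seen just above: list the exported devices over the RPC socket, pull the nbd_device fields out with jq, and count how many /dev/nbd paths remain. A rough reconstruction follows; the rpc.py path, RPC name and jq filter are taken from the log, the function wrapper itself is assumed, and the '|| true' mirrors the 'true' entry logged when grep -c matches nothing.

    # Sketch of nbd_get_count based on the traced commands.
    nbd_get_count() {
        local rpc_server=$1
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rpc" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

In the log this yields 2 while both Malloc bdevs are exported and 0 after nbd_stop_disks, which is exactly what the '[' 0 -ne 0 ']' guard checks before spdk_kill_instance is sent.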
00:09:40.205 03:37:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.205 03:37:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:40.772 03:37:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.772 03:37:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:40.772 03:37:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:41.030 Malloc0 00:09:41.030 03:37:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:41.596 Malloc1 00:09:41.596 03:37:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:41.596 03:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:41.854 /dev/nbd0 00:09:41.854 03:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:41.854 03:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:41.854 1+0 records in 00:09:41.854 1+0 records out 
00:09:41.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032913 s, 12.4 MB/s 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.854 03:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:41.855 03:37:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.855 03:37:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:41.855 03:37:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:41.855 03:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.855 03:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:41.855 03:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:42.113 /dev/nbd1 00:09:42.113 03:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:42.113 03:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:42.114 1+0 records in 00:09:42.114 1+0 records out 00:09:42.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316667 s, 12.9 MB/s 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:42.114 03:37:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:42.114 03:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.114 03:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.114 03:37:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.114 03:37:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.114 03:37:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:42.373 { 00:09:42.373 "nbd_device": "/dev/nbd0", 00:09:42.373 "bdev_name": "Malloc0" 00:09:42.373 }, 00:09:42.373 { 00:09:42.373 "nbd_device": "/dev/nbd1", 00:09:42.373 "bdev_name": "Malloc1" 00:09:42.373 } 
00:09:42.373 ]' 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:42.373 { 00:09:42.373 "nbd_device": "/dev/nbd0", 00:09:42.373 "bdev_name": "Malloc0" 00:09:42.373 }, 00:09:42.373 { 00:09:42.373 "nbd_device": "/dev/nbd1", 00:09:42.373 "bdev_name": "Malloc1" 00:09:42.373 } 00:09:42.373 ]' 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:42.373 /dev/nbd1' 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:42.373 /dev/nbd1' 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:42.373 256+0 records in 00:09:42.373 256+0 records out 00:09:42.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720622 s, 146 MB/s 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:42.373 256+0 records in 00:09:42.373 256+0 records out 00:09:42.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270263 s, 38.8 MB/s 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.373 03:37:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:42.632 256+0 records in 00:09:42.632 256+0 records out 00:09:42.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0564672 s, 18.6 MB/s 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:42.632 03:37:57 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.632 03:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.890 03:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.457 03:37:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.742 03:37:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:43.742 03:37:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:43.742 03:37:58 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:43.742 03:37:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:43.742 03:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:43.743 03:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.743 03:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:43.743 03:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:43.743 03:37:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:43.743 03:37:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:43.743 03:37:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:43.743 03:37:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:43.743 03:37:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:44.311 03:37:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:45.246 [2024-07-26 03:38:00.125202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.505 [2024-07-26 03:38:00.315796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.505 [2024-07-26 03:38:00.315808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.763 [2024-07-26 03:38:00.489041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:45.763 [2024-07-26 03:38:00.489161] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:47.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:47.137 03:38:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63894 /var/tmp/spdk-nbd.sock 00:09:47.137 03:38:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63894 ']' 00:09:47.137 03:38:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:47.137 03:38:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.137 03:38:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
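Each round above exercises the same write-then-verify data path: one 1 MiB file of random data is generated, copied onto every exported nbd device with O_DIRECT, compared back byte-for-byte, and then removed. Condensed from the nbd_dd_data_verify trace; the paths, block sizes and counts are the ones logged, while the surrounding control flow is a reconstruction.

    # Write phase (operation=write in the trace)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct # push it to each device
    done

    # Verify phase (operation=verify): byte-compare the first 1 MiB of each device
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"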
00:09:47.137 03:38:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.137 03:38:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:47.704 03:38:02 event.app_repeat -- event/event.sh@39 -- # killprocess 63894 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63894 ']' 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63894 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63894 00:09:47.704 killing process with pid 63894 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63894' 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63894 00:09:47.704 03:38:02 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63894 00:09:48.638 spdk_app_start is called in Round 0. 00:09:48.638 Shutdown signal received, stop current app iteration 00:09:48.638 Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 reinitialization... 00:09:48.638 spdk_app_start is called in Round 1. 00:09:48.638 Shutdown signal received, stop current app iteration 00:09:48.638 Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 reinitialization... 00:09:48.638 spdk_app_start is called in Round 2. 00:09:48.638 Shutdown signal received, stop current app iteration 00:09:48.638 Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 reinitialization... 00:09:48.638 spdk_app_start is called in Round 3. 
00:09:48.638 Shutdown signal received, stop current app iteration 00:09:48.638 03:38:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:48.638 03:38:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:48.638 00:09:48.638 real 0m23.048s 00:09:48.638 user 0m50.944s 00:09:48.638 sys 0m3.127s 00:09:48.638 03:38:03 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.638 03:38:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:48.638 ************************************ 00:09:48.638 END TEST app_repeat 00:09:48.638 ************************************ 00:09:48.638 03:38:03 event -- common/autotest_common.sh@1142 -- # return 0 00:09:48.638 03:38:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:48.638 03:38:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:48.638 03:38:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:48.638 03:38:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.638 03:38:03 event -- common/autotest_common.sh@10 -- # set +x 00:09:48.638 ************************************ 00:09:48.638 START TEST cpu_locks 00:09:48.638 ************************************ 00:09:48.638 03:38:03 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:48.896 * Looking for test storage... 00:09:48.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:48.896 03:38:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:48.896 03:38:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:48.896 03:38:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:48.896 03:38:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:48.896 03:38:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:48.896 03:38:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.896 03:38:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.896 ************************************ 00:09:48.896 START TEST default_locks 00:09:48.896 ************************************ 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64372 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64372 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64372 ']' 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
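The default_locks case that starts here follows a simple pattern, traced in full just below: launch one spdk_tgt on core mask 0x1, wait for its RPC socket, assert that the process holds a spdk_cpu_lock file, then kill it and assert the opposite. The outline below is pieced together from those traced steps; the lslocks check and the kill/wait pair are logged verbatim, but waitforlisten runs with xtrace disabled in this log, so the polling loop shown is an assumed stand-in rather than the helper's actual body, and rpc_get_methods is simply used as a cheap liveness probe.

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt" -m 0x1 &
    spdk_tgt_pid=$!

    # wait for the RPC socket to answer (assumed replacement for waitforlisten)
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

    # a running target must hold a spdk_cpu_lock file for its core mask
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock

    kill "$spdk_tgt_pid"
    wait "$spdk_tgt_pid"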
00:09:48.896 03:38:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.896 03:38:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.896 [2024-07-26 03:38:03.727921] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:09:48.896 [2024-07-26 03:38:03.728199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64372 ] 00:09:49.155 [2024-07-26 03:38:03.915031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.413 [2024-07-26 03:38:04.149193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.345 03:38:04 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.345 03:38:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:09:50.345 03:38:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64372 00:09:50.345 03:38:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64372 00:09:50.346 03:38:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64372 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 64372 ']' 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 64372 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64372 00:09:50.603 killing process with pid 64372 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64372' 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 64372 00:09:50.603 03:38:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 64372 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64372 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64372 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 64372 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 64372 ']' 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.132 ERROR: process (pid: 64372) is no longer running 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:53.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64372) - No such process 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:53.132 00:09:53.132 real 0m3.983s 00:09:53.132 user 0m4.136s 00:09:53.132 sys 0m0.676s 00:09:53.132 ************************************ 00:09:53.132 END TEST default_locks 00:09:53.132 ************************************ 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.132 03:38:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:53.132 03:38:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:53.132 03:38:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:53.132 03:38:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:53.132 03:38:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.132 03:38:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:53.132 ************************************ 00:09:53.132 START TEST default_locks_via_rpc 00:09:53.132 ************************************ 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64442 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64442 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 64442 ']' 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.132 03:38:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.132 [2024-07-26 03:38:07.753387] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:09:53.132 [2024-07-26 03:38:07.754259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64442 ] 00:09:53.132 [2024-07-26 03:38:07.936725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.390 [2024-07-26 03:38:08.130381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:53.956 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:53.957 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.957 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.957 03:38:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.957 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64442 00:09:53.957 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64442 00:09:53.957 03:38:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64442 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64442 ']' 
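default_locks_via_rpc drives the same lock state over the RPC socket instead of start-up flags: framework_disable_cpumask_locks must leave no spdk_cpu_lock files behind, and framework_enable_cpumask_locks must make the target hold one again. The checks below are condensed from the trace; the RPC method names, lslocks and grep come straight from the log, while the lock-file glob location is an assumption.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    spdk_tgt_pid=64442    # pid of the target started above in this run

    # release the per-core lock at runtime and confirm no lock files remain
    "$rpc" framework_disable_cpumask_locks
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock*)    # assumed lock-file location
    shopt -u nullglob
    (( ${#lock_files[@]} == 0 )) || echo "unexpected lock files: ${lock_files[*]}"

    # take the lock again and confirm the target is holding it
    "$rpc" framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock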
00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64442 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64442 00:09:54.522 killing process with pid 64442 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64442' 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64442 00:09:54.522 03:38:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64442 00:09:57.052 00:09:57.052 real 0m3.869s 00:09:57.052 user 0m4.005s 00:09:57.052 sys 0m0.637s 00:09:57.052 ************************************ 00:09:57.052 END TEST default_locks_via_rpc 00:09:57.052 ************************************ 00:09:57.052 03:38:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.052 03:38:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.052 03:38:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:57.052 03:38:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:57.052 03:38:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.052 03:38:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.052 03:38:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.052 ************************************ 00:09:57.052 START TEST non_locking_app_on_locked_coremask 00:09:57.052 ************************************ 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64516 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64516 /var/tmp/spdk.sock 00:09:57.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64516 ']' 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
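Every test above ends with the same killprocess helper, and its decision points are all visible in the xtrace: check the pid argument, confirm the process is still alive, read its command name with ps, refuse to signal a sudo wrapper, then kill and reap it. A sketch assembled from those traced steps; minor error handling is assumed.

    killprocess() {
        local pid=$1
        local process_name=""
        [ -z "$pid" ] && return 1          # a pid argument is required
        kill -0 "$pid"                     # fails if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            # none of the runs in this log hit this branch; handling of a
            # sudo wrapper is omitted from this sketch
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap it and collect the exit status
    }

In this log the process name is always reactor_0, so the sudo guard is skipped and the "killing process with pid ..." lines above come from the echo just before the kill.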
00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.052 03:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:57.052 [2024-07-26 03:38:11.631995] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:09:57.052 [2024-07-26 03:38:11.632164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64516 ] 00:09:57.052 [2024-07-26 03:38:11.795494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.311 [2024-07-26 03:38:12.055874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64532 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64532 /var/tmp/spdk2.sock 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64532 ']' 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.247 03:38:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:58.247 [2024-07-26 03:38:13.019786] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:09:58.247 [2024-07-26 03:38:13.020015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64532 ] 00:09:58.505 [2024-07-26 03:38:13.208367] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
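non_locking_app_on_locked_coremask exercises the coexistence case: the first target (pid 64516 here) takes the core-0 lock as usual, and a second target is then started on the same core mask with --disable-cpumask-locks and its own RPC socket, which is why the log prints "CPU core locks deactivated" instead of a lock conflict. The two launch commands as they appear in the trace; the backgrounding and pid capture are assumed plumbing.

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # first instance: claims the spdk_cpu_lock for core 0 (pid 64516 in this run)
    "$spdk_tgt" -m 0x1 &
    spdk_tgt_pid=$!

    # second instance: same core mask, but opts out of the lock and listens
    # on its own RPC socket (pid 64532 in this run)
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!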
00:09:58.505 [2024-07-26 03:38:13.208477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.782 [2024-07-26 03:38:13.609259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.316 03:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.316 03:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:01.316 03:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64516 00:10:01.316 03:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64516 00:10:01.316 03:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64516 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64516 ']' 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64516 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64516 00:10:02.312 killing process with pid 64516 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64516' 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64516 00:10:02.312 03:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64516 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64532 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64532 ']' 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64532 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64532 00:10:07.583 killing process with pid 64532 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64532' 00:10:07.583 03:38:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64532 00:10:07.583 03:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64532 00:10:08.968 ************************************ 00:10:08.968 END TEST non_locking_app_on_locked_coremask 00:10:08.968 ************************************ 00:10:08.968 00:10:08.968 real 0m12.139s 00:10:08.968 user 0m13.006s 00:10:08.968 sys 0m1.382s 00:10:08.968 03:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.968 03:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.968 03:38:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:08.968 03:38:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:08.968 03:38:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:08.968 03:38:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.968 03:38:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:08.968 ************************************ 00:10:08.968 START TEST locking_app_on_unlocked_coremask 00:10:08.968 ************************************ 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64690 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64690 /var/tmp/spdk.sock 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64690 ']' 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.968 03:38:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:09.225 [2024-07-26 03:38:23.889902] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:09.225 [2024-07-26 03:38:23.890187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64690 ] 00:10:09.226 [2024-07-26 03:38:24.082920] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:09.226 [2024-07-26 03:38:24.083033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.483 [2024-07-26 03:38:24.272369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64707 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64707 /var/tmp/spdk2.sock 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64707 ']' 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:10.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.415 03:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 [2024-07-26 03:38:25.110657] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:10:10.415 [2024-07-26 03:38:25.111236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64707 ] 00:10:10.415 [2024-07-26 03:38:25.303156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.980 [2024-07-26 03:38:25.703346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.354 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.355 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:12.355 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64707 00:10:12.355 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64707 00:10:12.355 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64690 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64690 ']' 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64690 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64690 00:10:13.289 killing process with pid 64690 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64690' 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64690 00:10:13.289 03:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64690 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64707 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64707 ']' 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64707 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64707 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:17.501 killing process with pid 64707 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64707' 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64707 00:10:17.501 03:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64707 00:10:20.036 00:10:20.036 real 0m10.677s 00:10:20.036 user 0m11.307s 00:10:20.036 sys 0m1.240s 00:10:20.036 03:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.037 ************************************ 00:10:20.037 END TEST locking_app_on_unlocked_coremask 00:10:20.037 ************************************ 00:10:20.037 03:38:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:20.037 03:38:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:20.037 03:38:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:20.037 03:38:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.037 03:38:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:20.037 ************************************ 00:10:20.037 START TEST locking_app_on_locked_coremask 00:10:20.037 ************************************ 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:10:20.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64842 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64842 /var/tmp/spdk.sock 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64842 ']' 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.037 03:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.037 [2024-07-26 03:38:34.573656] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:10:20.037 [2024-07-26 03:38:34.573811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64842 ] 00:10:20.037 [2024-07-26 03:38:34.733656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.037 [2024-07-26 03:38:34.920174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.973 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.973 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:20.973 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64858 00:10:20.973 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:20.973 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64858 /var/tmp/spdk2.sock 00:10:20.973 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:20.973 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64858 /var/tmp/spdk2.sock 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:20.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64858 /var/tmp/spdk2.sock 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64858 ']' 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.974 03:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.974 [2024-07-26 03:38:35.838271] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:10:20.974 [2024-07-26 03:38:35.838518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64858 ] 00:10:21.232 [2024-07-26 03:38:36.036666] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64842 has claimed it. 00:10:21.232 [2024-07-26 03:38:36.036791] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:21.806 ERROR: process (pid: 64858) is no longer running 00:10:21.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64858) - No such process 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64842 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64842 00:10:21.806 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64842 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64842 ']' 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64842 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64842 00:10:22.092 killing process with pid 64842 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64842' 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64842 00:10:22.092 03:38:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64842 00:10:24.621 ************************************ 00:10:24.621 END TEST locking_app_on_locked_coremask 00:10:24.621 ************************************ 00:10:24.621 00:10:24.621 real 0m4.762s 00:10:24.621 user 0m5.456s 00:10:24.621 sys 0m0.783s 00:10:24.621 03:38:39 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.621 03:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 03:38:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:24.621 03:38:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:24.621 03:38:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:24.621 03:38:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.621 03:38:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 ************************************ 00:10:24.621 START TEST locking_overlapped_coremask 00:10:24.621 ************************************ 00:10:24.621 03:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:10:24.621 03:38:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64928 00:10:24.621 03:38:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:24.622 03:38:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64928 /var/tmp/spdk.sock 00:10:24.622 03:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64928 ']' 00:10:24.622 03:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.622 03:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.622 03:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.622 03:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.622 03:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:24.622 [2024-07-26 03:38:39.381513] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:10:24.622 [2024-07-26 03:38:39.382027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64928 ] 00:10:24.879 [2024-07-26 03:38:39.571716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.137 [2024-07-26 03:38:39.850232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.137 [2024-07-26 03:38:39.850292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.137 [2024-07-26 03:38:39.850295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64946 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64946 /var/tmp/spdk2.sock 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64946 /var/tmp/spdk2.sock 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:26.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64946 /var/tmp/spdk2.sock 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64946 ']' 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.071 03:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:26.071 [2024-07-26 03:38:40.787518] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:10:26.071 [2024-07-26 03:38:40.787724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64946 ] 00:10:26.329 [2024-07-26 03:38:40.996877] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64928 has claimed it. 00:10:26.329 [2024-07-26 03:38:40.997902] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:26.587 ERROR: process (pid: 64946) is no longer running 00:10:26.587 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64946) - No such process 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64928 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64928 ']' 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64928 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64928 00:10:26.587 killing process with pid 64928 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64928' 00:10:26.587 03:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64928 00:10:26.587 03:38:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64928 00:10:29.112 ************************************ 00:10:29.112 END TEST locking_overlapped_coremask 00:10:29.112 ************************************ 00:10:29.112 00:10:29.112 real 0m4.554s 00:10:29.112 user 0m11.858s 00:10:29.112 sys 0m0.650s 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:29.112 03:38:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:29.112 03:38:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:29.112 03:38:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:29.112 03:38:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.112 03:38:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:29.112 ************************************ 00:10:29.112 START TEST locking_overlapped_coremask_via_rpc 00:10:29.112 ************************************ 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=65010 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:29.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 65010 /var/tmp/spdk.sock 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 65010 ']' 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.112 03:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.112 [2024-07-26 03:38:43.932161] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:29.112 [2024-07-26 03:38:43.932345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65010 ] 00:10:29.370 [2024-07-26 03:38:44.095273] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
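The claim failure above is pure core-mask overlap: the first target in this test ran with -m 0x7 (cores 0 to 2) and the second with -m 0x1c (cores 2 to 4), so both masks cover core 2 and the second instance cannot take its lock. Shell arithmetic on the two masks from the log shows the shared core directly:

    printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2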
00:10:29.370 [2024-07-26 03:38:44.095391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.627 [2024-07-26 03:38:44.321990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.628 [2024-07-26 03:38:44.322071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.628 [2024-07-26 03:38:44.322081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=65039 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 65039 /var/tmp/spdk2.sock 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 65039 ']' 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:30.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.195 03:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.453 [2024-07-26 03:38:45.161192] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:30.453 [2024-07-26 03:38:45.161600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65039 ] 00:10:30.453 [2024-07-26 03:38:45.338436] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:30.453 [2024-07-26 03:38:45.341847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:31.019 [2024-07-26 03:38:45.792473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.019 [2024-07-26 03:38:45.795977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.019 [2024-07-26 03:38:45.795990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.549 [2024-07-26 03:38:48.096200] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 65010 has claimed it. 
00:10:33.549 request: 00:10:33.549 { 00:10:33.549 "method": "framework_enable_cpumask_locks", 00:10:33.549 "req_id": 1 00:10:33.549 } 00:10:33.549 Got JSON-RPC error response 00:10:33.549 response: 00:10:33.549 { 00:10:33.549 "code": -32603, 00:10:33.549 "message": "Failed to claim CPU core: 2" 00:10:33.549 } 00:10:33.549 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 65010 /var/tmp/spdk.sock 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 65010 ']' 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 65039 /var/tmp/spdk2.sock 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 65039 ']' 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:33.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
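The request/response pair above is the point of this test: both targets start with --disable-cpumask-locks, the first (pid 65010) then enables locks over RPC and claims cores 0 to 2, and the same RPC against the second instance's socket must fail with -32603 because core 2 is already held. A hedged equivalent of that rpc_cmd call, assuming the rpc.py helper at the repo path shown in this log:

    # Same RPC the test issues against the second target; the expected result
    # is the "Failed to claim CPU core: 2" error recorded above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks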
00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.550 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.808 ************************************ 00:10:33.808 END TEST locking_overlapped_coremask_via_rpc 00:10:33.808 ************************************ 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:33.808 00:10:33.808 real 0m4.857s 00:10:33.808 user 0m2.004s 00:10:33.808 sys 0m0.240s 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.808 03:38:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.066 03:38:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:34.066 03:38:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:34.066 03:38:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 65010 ]] 00:10:34.066 03:38:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 65010 00:10:34.066 03:38:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 65010 ']' 00:10:34.066 03:38:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 65010 00:10:34.066 03:38:48 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:10:34.066 03:38:48 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.066 03:38:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65010 00:10:34.066 killing process with pid 65010 00:10:34.067 03:38:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:34.067 03:38:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:34.067 03:38:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65010' 00:10:34.067 03:38:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 65010 00:10:34.067 03:38:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 65010 00:10:36.598 03:38:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 65039 ]] 00:10:36.598 03:38:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 65039 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 65039 ']' 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 65039 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:10:36.598 03:38:51 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65039 00:10:36.598 killing process with pid 65039 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65039' 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 65039 00:10:36.598 03:38:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 65039 00:10:39.128 03:38:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:39.128 03:38:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:39.128 03:38:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 65010 ]] 00:10:39.128 03:38:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 65010 00:10:39.128 03:38:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 65010 ']' 00:10:39.128 03:38:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 65010 00:10:39.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (65010) - No such process 00:10:39.128 03:38:53 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 65010 is not found' 00:10:39.128 Process with pid 65010 is not found 00:10:39.128 Process with pid 65039 is not found 00:10:39.128 03:38:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 65039 ]] 00:10:39.128 03:38:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 65039 00:10:39.128 03:38:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 65039 ']' 00:10:39.128 03:38:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 65039 00:10:39.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (65039) - No such process 00:10:39.129 03:38:53 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 65039 is not found' 00:10:39.129 03:38:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:39.129 ************************************ 00:10:39.129 END TEST cpu_locks 00:10:39.129 ************************************ 00:10:39.129 00:10:39.129 real 0m50.240s 00:10:39.129 user 1m29.642s 00:10:39.129 sys 0m6.582s 00:10:39.129 03:38:53 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.129 03:38:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.129 03:38:53 event -- common/autotest_common.sh@1142 -- # return 0 00:10:39.129 ************************************ 00:10:39.129 END TEST event 00:10:39.129 ************************************ 00:10:39.129 00:10:39.129 real 1m24.896s 00:10:39.129 user 2m38.272s 00:10:39.129 sys 0m10.664s 00:10:39.129 03:38:53 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.129 03:38:53 event -- common/autotest_common.sh@10 -- # set +x 00:10:39.129 03:38:53 -- common/autotest_common.sh@1142 -- # return 0 00:10:39.129 03:38:53 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:39.129 03:38:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:39.129 03:38:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.129 03:38:53 -- common/autotest_common.sh@10 -- # set +x 00:10:39.129 ************************************ 00:10:39.129 START TEST thread 
00:10:39.129 ************************************ 00:10:39.129 03:38:53 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:39.129 * Looking for test storage... 00:10:39.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:39.129 03:38:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:39.129 03:38:53 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:39.129 03:38:53 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.129 03:38:53 thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.129 ************************************ 00:10:39.129 START TEST thread_poller_perf 00:10:39.129 ************************************ 00:10:39.129 03:38:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:39.129 [2024-07-26 03:38:53.938186] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:39.129 [2024-07-26 03:38:53.938703] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65226 ] 00:10:39.387 [2024-07-26 03:38:54.123638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.646 [2024-07-26 03:38:54.329859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.646 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:41.070 ====================================== 00:10:41.070 busy:2216065253 (cyc) 00:10:41.070 total_run_count: 279000 00:10:41.070 tsc_hz: 2200000000 (cyc) 00:10:41.070 ====================================== 00:10:41.070 poller_cost: 7942 (cyc), 3610 (nsec) 00:10:41.070 00:10:41.070 real 0m1.882s 00:10:41.070 user 0m1.650s 00:10:41.070 sys 0m0.116s 00:10:41.070 03:38:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.070 03:38:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:41.070 ************************************ 00:10:41.070 END TEST thread_poller_perf 00:10:41.070 ************************************ 00:10:41.070 03:38:55 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:41.070 03:38:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:41.070 03:38:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:41.070 03:38:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.070 03:38:55 thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.070 ************************************ 00:10:41.070 START TEST thread_poller_perf 00:10:41.070 ************************************ 00:10:41.070 03:38:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:41.070 [2024-07-26 03:38:55.860529] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
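The summary block above is consistent with poller_cost being the busy cycle count divided by total_run_count, then converted to nanoseconds at the reported tsc_hz. Reproducing this run's figures with shell arithmetic:

    echo $(( 2216065253 / 279000 ))              # 7942 cycles per poller invocation
    echo $(( 7942 * 1000000000 / 2200000000 ))   # 3610 nsec at a 2.2 GHz TSC
    # The same arithmetic reproduces the 620 cyc / 281 nsec of the zero-period run below.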
00:10:41.070 [2024-07-26 03:38:55.860692] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65268 ] 00:10:41.328 [2024-07-26 03:38:56.036082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.587 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:41.587 [2024-07-26 03:38:56.306670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.964 ====================================== 00:10:42.964 busy:2203914489 (cyc) 00:10:42.964 total_run_count: 3550000 00:10:42.964 tsc_hz: 2200000000 (cyc) 00:10:42.964 ====================================== 00:10:42.964 poller_cost: 620 (cyc), 281 (nsec) 00:10:42.964 ************************************ 00:10:42.964 END TEST thread_poller_perf 00:10:42.964 ************************************ 00:10:42.964 00:10:42.964 real 0m1.922s 00:10:42.964 user 0m1.696s 00:10:42.964 sys 0m0.108s 00:10:42.964 03:38:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.964 03:38:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:42.964 03:38:57 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:42.964 03:38:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:42.964 00:10:42.964 real 0m3.958s 00:10:42.964 user 0m3.398s 00:10:42.964 sys 0m0.325s 00:10:42.964 ************************************ 00:10:42.964 END TEST thread 00:10:42.964 ************************************ 00:10:42.964 03:38:57 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.964 03:38:57 thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.964 03:38:57 -- common/autotest_common.sh@1142 -- # return 0 00:10:42.964 03:38:57 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:42.964 03:38:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:42.964 03:38:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.964 03:38:57 -- common/autotest_common.sh@10 -- # set +x 00:10:42.964 ************************************ 00:10:42.964 START TEST accel 00:10:42.964 ************************************ 00:10:42.964 03:38:57 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:43.224 * Looking for test storage... 00:10:43.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:43.224 03:38:57 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:10:43.224 03:38:57 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:10:43.224 03:38:57 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:43.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.224 03:38:57 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65349 00:10:43.224 03:38:57 accel -- accel/accel.sh@63 -- # waitforlisten 65349 00:10:43.224 03:38:57 accel -- common/autotest_common.sh@829 -- # '[' -z 65349 ']' 00:10:43.224 03:38:57 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.224 03:38:57 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.224 03:38:57 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:43.224 03:38:57 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:43.224 03:38:57 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:43.224 03:38:57 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.224 03:38:57 accel -- common/autotest_common.sh@10 -- # set +x 00:10:43.224 03:38:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:43.224 03:38:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:43.224 03:38:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.224 03:38:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.224 03:38:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:43.224 03:38:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:43.224 03:38:57 accel -- accel/accel.sh@41 -- # jq -r . 00:10:43.224 [2024-07-26 03:38:57.985748] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:43.224 [2024-07-26 03:38:57.986158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65349 ] 00:10:43.483 [2024-07-26 03:38:58.149215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.483 [2024-07-26 03:38:58.358038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@862 -- # return 0 00:10:44.418 03:38:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:44.418 03:38:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:44.418 03:38:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:44.418 03:38:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:10:44.418 03:38:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:44.418 03:38:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.418 03:38:59 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@10 -- # set +x 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 
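The assignment loop above is driven by accel_get_opc_assignments: the jq filter flattens the JSON map of opcode-to-module assignments into key=value lines that the shell then splits on '='. A standalone rerun of that filter on a made-up two-opcode response:

    # The input JSON here is hypothetical; the test feeds it from rpc_cmd.
    echo '{"copy":"software","fill":"software"}' \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # prints: copy=software
    #         fill=software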
03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # IFS== 00:10:44.418 03:38:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:44.418 03:38:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:44.418 03:38:59 accel -- accel/accel.sh@75 -- # killprocess 65349 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@948 -- # '[' -z 65349 ']' 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@952 -- # kill -0 65349 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@953 -- # uname 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65349 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65349' 00:10:44.418 killing process with pid 65349 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@967 -- # kill 65349 00:10:44.418 03:38:59 accel -- common/autotest_common.sh@972 -- # wait 65349 00:10:46.950 03:39:01 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:46.950 03:39:01 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:10:46.950 03:39:01 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.950 03:39:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.950 03:39:01 accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.950 03:39:01 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:10:46.950 03:39:01 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
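Note: the loop traced above consumes jq's "opcode=module" output (the `to_entries | map("\(.key)=\(.value)") | .[]` filter at the top of this block) and pins every opcode to the software module. A minimal stand-alone sketch of that pattern, using a hypothetical sample list in place of the live RPC output (an assumption about intent, not the verbatim accel.sh source):
    declare -A expected_opcs
    exp_opcs=("copy=software" "fill=software" "crc32c=software")   # hypothetical sample input
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"
        expected_opcs["$opc"]=software    # every opcode is expected to run on the software module
    done
    declare -p expected_opcs              # prints the resulting opcode-to-module map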
00:10:46.950 03:39:01 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.950 03:39:01 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:10:46.950 03:39:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:46.950 03:39:01 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:46.950 03:39:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:46.950 03:39:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.950 03:39:01 accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.950 ************************************ 00:10:46.950 START TEST accel_missing_filename 00:10:46.950 ************************************ 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.950 03:39:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:10:46.950 03:39:01 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:10:46.950 [2024-07-26 03:39:01.562296] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:46.950 [2024-07-26 03:39:01.562530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65425 ] 00:10:46.950 [2024-07-26 03:39:01.751477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.209 [2024-07-26 03:39:02.011598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.468 [2024-07-26 03:39:02.200707] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:48.035 [2024-07-26 03:39:02.674504] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:10:48.293 A filename is required. 
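Note: accel_perf is expected to fail here (no -l input file), and the status handling traced just below folds the non-zero exit code down to 1 before asserting failure (234 minus the 128 signal offset gives 106, which then collapses to 1). A rough, self-contained sketch of that expected-failure wrapper, an assumed shape of the NOT helper in common/autotest_common.sh rather than its literal source:
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # strip the 128+signal offset
        case "$es" in
            0) ;;                              # the command unexpectedly succeeded
            *) es=1 ;;                         # any real failure collapses to 1
        esac
        (( !es == 0 ))                         # exit 0 (test passes) only when the wrapped command failed
    }
    NOT false && echo "expected failure observed"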
00:10:48.293 ************************************ 00:10:48.293 END TEST accel_missing_filename 00:10:48.293 ************************************ 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:48.293 00:10:48.293 real 0m1.584s 00:10:48.293 user 0m1.343s 00:10:48.293 sys 0m0.177s 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.293 03:39:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:10:48.293 03:39:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:48.293 03:39:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:48.293 03:39:03 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:48.293 03:39:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.293 03:39:03 accel -- common/autotest_common.sh@10 -- # set +x 00:10:48.293 ************************************ 00:10:48.293 START TEST accel_compress_verify 00:10:48.293 ************************************ 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.293 03:39:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:48.293 03:39:03 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:10:48.293 03:39:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:10:48.293 [2024-07-26 03:39:03.173587] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:48.293 [2024-07-26 03:39:03.173744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65460 ] 00:10:48.568 [2024-07-26 03:39:03.340351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.837 [2024-07-26 03:39:03.537377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.837 [2024-07-26 03:39:03.723738] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:49.403 [2024-07-26 03:39:04.182305] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:10:49.970 00:10:49.970 Compression does not support the verify option, aborting. 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:10:49.970 ************************************ 00:10:49.970 END TEST accel_compress_verify 00:10:49.970 ************************************ 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.970 00:10:49.970 real 0m1.458s 00:10:49.970 user 0m1.244s 00:10:49.970 sys 0m0.150s 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.970 03:39:04 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:49.970 03:39:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:49.971 03:39:04 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.971 ************************************ 00:10:49.971 START TEST accel_wrong_workload 00:10:49.971 ************************************ 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
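Note: each of these negative tests first runs the `valid_exec_arg accel_perf` / `type -t accel_perf` guard visible in the trace before re-invoking the command. A small sketch of that guard as its intent appears here (an assumption; the real helper in common/autotest_common.sh may differ):
    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            function|builtin|file|alias) return 0 ;;   # something the shell can actually run
            *) return 1 ;;
        esac
    }
    valid_exec_arg accel_perf || echo "accel_perf is not callable from this shell"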
00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:10:49.971 03:39:04 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:10:49.971 Unsupported workload type: foobar 00:10:49.971 [2024-07-26 03:39:04.677035] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:49.971 accel_perf options: 00:10:49.971 [-h help message] 00:10:49.971 [-q queue depth per core] 00:10:49.971 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:49.971 [-T number of threads per core 00:10:49.971 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:49.971 [-t time in seconds] 00:10:49.971 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:49.971 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:49.971 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:49.971 [-l for compress/decompress workloads, name of uncompressed input file 00:10:49.971 [-S for crc32c workload, use this seed value (default 0) 00:10:49.971 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:49.971 [-f for fill workload, use this BYTE value (default 255) 00:10:49.971 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:49.971 [-y verify result if this switch is on] 00:10:49.971 [-a tasks to allocate per core (default: same value as -q)] 00:10:49.971 Can be used to spread operations across a wider range of memory. 
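Note: the usage text above covers every flag exercised in this run. For reference, two invocations assembled only from those flags, assuming the default software module needs no JSON config (the harness itself passes one via -c /dev/fd/62); actual throughput output will vary per run:
    # accepted: one-second crc32c run with seed 32, verification on, queue depth 64
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -q 64 -t 1 -w crc32c -S 32 -y
    # rejected, exactly as reported above: foobar is not a supported workload type
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar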
00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.971 00:10:49.971 real 0m0.072s 00:10:49.971 user 0m0.080s 00:10:49.971 sys 0m0.040s 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.971 ************************************ 00:10:49.971 END TEST accel_wrong_workload 00:10:49.971 ************************************ 00:10:49.971 03:39:04 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:49.971 03:39:04 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.971 ************************************ 00:10:49.971 START TEST accel_negative_buffers 00:10:49.971 ************************************ 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:10:49.971 03:39:04 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:10:49.971 -x option must be non-negative. 
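Note on the -x rejection just reported (the full usage listing follows below): per that usage text, the xor workload needs a non-negative source-buffer count with a documented minimum of two, so a passing variant of this command would only change the -x value:
    # rejected, as reported above: negative source-buffer count
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1
    # hypothetical accepted variant using the documented minimum of two source buffers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2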
00:10:49.971 [2024-07-26 03:39:04.798766] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:49.971 accel_perf options: 00:10:49.971 [-h help message] 00:10:49.971 [-q queue depth per core] 00:10:49.971 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:49.971 [-T number of threads per core 00:10:49.971 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:49.971 [-t time in seconds] 00:10:49.971 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:49.971 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:49.971 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:49.971 [-l for compress/decompress workloads, name of uncompressed input file 00:10:49.971 [-S for crc32c workload, use this seed value (default 0) 00:10:49.971 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:49.971 [-f for fill workload, use this BYTE value (default 255) 00:10:49.971 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:49.971 [-y verify result if this switch is on] 00:10:49.971 [-a tasks to allocate per core (default: same value as -q)] 00:10:49.971 Can be used to spread operations across a wider range of memory. 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.971 00:10:49.971 real 0m0.077s 00:10:49.971 user 0m0.084s 00:10:49.971 sys 0m0.038s 00:10:49.971 ************************************ 00:10:49.971 END TEST accel_negative_buffers 00:10:49.971 ************************************ 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.971 03:39:04 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:49.971 03:39:04 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.971 03:39:04 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.971 ************************************ 00:10:49.971 START TEST accel_crc32c 00:10:49.971 ************************************ 00:10:49.971 03:39:04 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:49.971 03:39:04 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:49.971 03:39:04 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:50.231 03:39:04 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:50.231 [2024-07-26 03:39:04.918631] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:50.231 [2024-07-26 03:39:04.918799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65534 ] 00:10:50.231 [2024-07-26 03:39:05.081364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.536 [2024-07-26 03:39:05.271464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:50.794 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:50.795 03:39:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:52.698 03:39:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.698 00:10:52.698 real 0m2.453s 00:10:52.698 user 0m2.191s 00:10:52.698 sys 0m0.159s 00:10:52.698 03:39:07 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.698 ************************************ 00:10:52.698 END TEST accel_crc32c 00:10:52.698 ************************************ 00:10:52.698 03:39:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:52.698 03:39:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:52.698 03:39:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:52.698 03:39:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:52.698 03:39:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.698 03:39:07 accel -- common/autotest_common.sh@10 -- # set +x 00:10:52.698 ************************************ 00:10:52.698 START TEST accel_crc32c_C2 00:10:52.698 ************************************ 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:52.698 03:39:07 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:52.698 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:52.698 [2024-07-26 03:39:07.426432] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:52.698 [2024-07-26 03:39:07.426665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65585 ] 00:10:52.698 [2024-07-26 03:39:07.598448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.957 [2024-07-26 03:39:07.788111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.216 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:53.217 03:39:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:55.119 03:39:09 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:55.119 00:10:55.119 real 0m2.473s 00:10:55.119 user 0m2.214s 00:10:55.119 sys 0m0.156s 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.119 03:39:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:55.119 ************************************ 00:10:55.119 END TEST accel_crc32c_C2 00:10:55.119 ************************************ 00:10:55.119 03:39:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:55.119 03:39:09 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:55.119 03:39:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:55.120 03:39:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.120 03:39:09 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.120 ************************************ 00:10:55.120 START TEST accel_copy 00:10:55.120 ************************************ 00:10:55.120 03:39:09 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.120 03:39:09 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:55.120 03:39:09 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:55.120 [2024-07-26 03:39:09.938905] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:55.120 [2024-07-26 03:39:09.939081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65627 ] 00:10:55.379 [2024-07-26 03:39:10.110121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.637 [2024-07-26 03:39:10.299043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 
03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:55.637 03:39:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:57.534 03:39:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:57.534 00:10:57.534 real 0m2.475s 00:10:57.534 user 0m2.227s 00:10:57.534 sys 0m0.145s 00:10:57.534 03:39:12 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.534 03:39:12 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:57.534 ************************************ 00:10:57.534 END TEST accel_copy 00:10:57.534 ************************************ 00:10:57.534 03:39:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:57.534 03:39:12 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:57.534 03:39:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:57.534 03:39:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.534 03:39:12 accel -- common/autotest_common.sh@10 -- # set +x 00:10:57.534 ************************************ 00:10:57.534 START TEST accel_fill 00:10:57.534 ************************************ 00:10:57.534 03:39:12 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.534 03:39:12 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:57.534 03:39:12 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:57.792 [2024-07-26 03:39:12.450632] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:10:57.792 [2024-07-26 03:39:12.450787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65674 ] 00:10:57.792 [2024-07-26 03:39:12.613241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.050 [2024-07-26 03:39:12.855796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.308 03:39:13 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:58.309 03:39:13 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:58.309 03:39:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
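For reference, the accel_fill case traced above is driven by the accel_perf example binary shown at the start of this test; a minimal sketch for reproducing just this workload outside the harness, assuming the repo path from the trace (the harness additionally passes -c /dev/fd/62 to hand the tool a generated JSON accel config over a file descriptor):
  # 1-second software 'fill' run; -w selects the workload, -q the queue depth, -y enables output
  # verification, -f 128 matches the 0x80 fill byte and -a 64 the alignment seen in the val= entries
  # above (flag meanings inferred from this trace, not from accel_perf documentation)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y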
00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:11:00.207 03:39:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.207 00:11:00.207 real 0m2.520s 00:11:00.207 user 0m2.264s 00:11:00.207 sys 0m0.153s 00:11:00.207 03:39:14 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.207 03:39:14 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:11:00.207 ************************************ 00:11:00.207 END TEST accel_fill 00:11:00.207 ************************************ 00:11:00.207 03:39:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:00.207 03:39:14 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:11:00.207 03:39:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:00.207 03:39:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.207 03:39:14 accel -- common/autotest_common.sh@10 -- # set +x 00:11:00.207 ************************************ 00:11:00.207 START TEST accel_copy_crc32c 00:11:00.207 ************************************ 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:11:00.207 03:39:14 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:00.207 [2024-07-26 03:39:15.021592] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:00.207 [2024-07-26 03:39:15.021810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65720 ] 00:11:00.464 [2024-07-26 03:39:15.200906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.722 [2024-07-26 03:39:15.399497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.722 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:00.723 03:39:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
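The copy_crc32c case being configured here uses the same binary with only the workload and verification flags, per the accel_perf command earlier in this trace; a comparable sketch, again assuming the path from the trace and leaving buffer sizes and queue depth at their defaults:
  # 1-second CRC32C-generating copy with output verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y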
00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:02.630 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:02.631 00:11:02.631 real 0m2.506s 00:11:02.631 user 0m2.249s 00:11:02.631 sys 0m0.154s 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.631 03:39:17 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:02.631 ************************************ 00:11:02.631 END TEST accel_copy_crc32c 00:11:02.631 ************************************ 00:11:02.631 03:39:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:02.631 03:39:17 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:02.631 03:39:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:02.631 03:39:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.631 03:39:17 accel -- common/autotest_common.sh@10 -- # set +x 00:11:02.631 ************************************ 00:11:02.631 START TEST accel_copy_crc32c_C2 00:11:02.631 ************************************ 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:02.631 03:39:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:02.889 [2024-07-26 03:39:17.561524] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:02.889 [2024-07-26 03:39:17.561683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65767 ] 00:11:02.889 [2024-07-26 03:39:17.723517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.147 [2024-07-26 03:39:17.910884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:03.405 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.406 03:39:18 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:03.406 03:39:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:05.306 03:39:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:05.306 03:39:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:05.306 03:39:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.306 00:11:05.306 real 0m2.489s 00:11:05.306 user 0m2.249s 00:11:05.306 sys 0m0.132s 00:11:05.306 03:39:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
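The copy_crc32c -C 2 case whose timings appear just above differs from the plain copy_crc32c run only in the added -C 2 flag; the '4096 bytes' and '8192 bytes' val= entries earlier in this case are consistent with chaining two 4096-byte blocks, though that reading is inferred from the trace alone. A hedged sketch of the direct invocation, path as in the trace:
  # 1-second chained copy_crc32c over 2 buffers, with verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2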
00:11:05.306 03:39:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:11:05.306 ************************************ 00:11:05.306 END TEST accel_copy_crc32c_C2 00:11:05.306 ************************************ 00:11:05.306 03:39:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:05.306 03:39:20 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:05.306 03:39:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:05.306 03:39:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.306 03:39:20 accel -- common/autotest_common.sh@10 -- # set +x 00:11:05.306 ************************************ 00:11:05.306 START TEST accel_dualcast 00:11:05.306 ************************************ 00:11:05.306 03:39:20 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:11:05.306 03:39:20 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:11:05.306 [2024-07-26 03:39:20.089665] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:11:05.306 [2024-07-26 03:39:20.089871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65808 ] 00:11:05.564 [2024-07-26 03:39:20.251496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.564 [2024-07-26 03:39:20.456319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:05.823 03:39:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:11:07.726 03:39:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:07.726 00:11:07.726 real 0m2.499s 00:11:07.726 user 0m2.237s 00:11:07.726 sys 0m0.160s 00:11:07.726 03:39:22 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.726 03:39:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:11:07.726 ************************************ 00:11:07.726 END TEST accel_dualcast 00:11:07.726 ************************************ 00:11:07.726 03:39:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:07.726 03:39:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:07.726 03:39:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:07.726 03:39:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.726 03:39:22 accel -- common/autotest_common.sh@10 -- # set +x 00:11:07.726 ************************************ 00:11:07.726 START TEST accel_compare 00:11:07.726 ************************************ 00:11:07.726 03:39:22 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:11:07.726 03:39:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:11:07.726 [2024-07-26 03:39:22.625143] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:11:07.726 [2024-07-26 03:39:22.625308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65860 ] 00:11:08.027 [2024-07-26 03:39:22.786640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.286 [2024-07-26 03:39:22.976446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:08.286 03:39:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:11:10.186 03:39:25 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.186 00:11:10.186 real 0m2.446s 00:11:10.186 user 0m2.217s 00:11:10.186 sys 0m0.129s 00:11:10.186 03:39:25 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.186 03:39:25 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:11:10.186 ************************************ 00:11:10.186 END TEST accel_compare 00:11:10.186 ************************************ 00:11:10.186 03:39:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:10.186 03:39:25 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:10.186 03:39:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:10.186 03:39:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.186 03:39:25 accel -- common/autotest_common.sh@10 -- # set +x 00:11:10.187 ************************************ 00:11:10.187 START TEST accel_xor 00:11:10.187 ************************************ 00:11:10.187 03:39:25 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:10.187 03:39:25 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:10.445 [2024-07-26 03:39:25.127132] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:11:10.445 [2024-07-26 03:39:25.127349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65901 ] 00:11:10.445 [2024-07-26 03:39:25.329740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.738 [2024-07-26 03:39:25.551083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:10.996 03:39:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:12.895 03:39:27 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:12.895 00:11:12.895 real 0m2.556s 00:11:12.895 user 0m0.012s 00:11:12.895 sys 0m0.005s 00:11:12.895 03:39:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.895 03:39:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 ************************************ 00:11:12.895 END TEST accel_xor 00:11:12.895 ************************************ 00:11:12.895 03:39:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:12.895 03:39:27 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:12.895 03:39:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:12.895 03:39:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.895 03:39:27 accel -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 ************************************ 00:11:12.895 START TEST accel_xor 00:11:12.895 ************************************ 00:11:12.895 03:39:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:12.895 03:39:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:12.895 [2024-07-26 03:39:27.726305] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
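The accel_xor test is rerun here with '-x 3' added to the workload arguments, which appears to request three XOR source buffers rather than the default; the configuration read back below still shows a 4096-byte transfer, the software module and a 1 second run. A rough way to repeat just this case against a built tree at the path used in this log would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

The '-c /dev/fd/62' seen in the traced command is how the wrapper passes an accel JSON config over a file descriptor; the trace shows accel_json_cfg=() staying empty, so dropping '-c' for a standalone run is assumed (not stated by the log) to be equivalent.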
00:11:12.895 [2024-07-26 03:39:27.726530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65948 ] 00:11:13.153 [2024-07-26 03:39:27.896562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.412 [2024-07-26 03:39:28.158360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
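The long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' entries surrounding this point are xtrace output from accel.sh reading back, one key:value pair at a time, the settings accel_perf reports, and remembering the opcode and module so that the closing '[[ -n software ]] && [[ -n xor ]]' style checks can confirm the software path actually ran. A minimal sketch of that loop, with the key names assumed since they are not visible in the trace:

    # assumed reconstruction of the read-back loop in test/accel/accel.sh;
    # only the IFS=: / read / case pattern is taken from the trace itself
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=$val ;;      # e.g. xor, dif_verify, compress
            *module*) accel_module=$val ;;   # e.g. software
        esac
    done
    [[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == software ]]

Most of the values read back (0x1, 32, '1 seconds', Yes and so on) simply fall through the case statement, which is why the same three trace lines repeat for every key.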
00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:13.671 03:39:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:15.573 03:39:30 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:15.573 03:39:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.573 00:11:15.573 real 0m2.559s 00:11:15.573 user 0m2.288s 00:11:15.573 sys 0m0.165s 00:11:15.573 03:39:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.573 ************************************ 00:11:15.573 END TEST accel_xor 00:11:15.573 ************************************ 00:11:15.573 03:39:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:15.573 03:39:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:15.573 03:39:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:15.573 03:39:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:15.573 03:39:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.573 03:39:30 accel -- common/autotest_common.sh@10 -- # set +x 00:11:15.573 ************************************ 00:11:15.573 START TEST accel_dif_verify 00:11:15.573 ************************************ 00:11:15.573 03:39:30 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:15.573 03:39:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:11:15.573 [2024-07-26 03:39:30.331897] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
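The DIF verify case starting here is driven the same way: the wrapper call visible above is 'run_test accel_dif_verify accel_test -t 1 -w dif_verify', which ends up as

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify

The 4096-byte, 512-byte and 8-byte values read back in the following entries are presumably the transfer size, DIF block size and metadata size for the verify operation; the trace prints only the values, not the option names, so that mapping is an inference rather than something the log states.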
00:11:15.573 [2024-07-26 03:39:30.332114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65994 ] 00:11:15.832 [2024-07-26 03:39:30.506528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.090 [2024-07-26 03:39:30.766189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.090 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:16.090 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.090 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.091 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:16.349 03:39:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:18.249 03:39:32 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:11:18.249 03:39:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:18.249 00:11:18.249 real 0m2.647s 00:11:18.249 user 0m0.017s 00:11:18.249 sys 0m0.005s 00:11:18.249 03:39:32 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.249 03:39:32 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:11:18.249 ************************************ 00:11:18.249 END TEST accel_dif_verify 00:11:18.249 ************************************ 00:11:18.249 03:39:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:18.249 03:39:32 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:18.249 03:39:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:18.249 03:39:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.249 03:39:32 accel -- common/autotest_common.sh@10 -- # set +x 00:11:18.249 ************************************ 00:11:18.249 START TEST accel_dif_generate 00:11:18.249 ************************************ 00:11:18.249 03:39:32 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.249 03:39:32 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:11:18.249 03:39:32 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:11:18.249 [2024-07-26 03:39:33.006396] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:18.249 [2024-07-26 03:39:33.006592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66041 ] 00:11:18.507 [2024-07-26 03:39:33.169561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.507 [2024-07-26 03:39:33.362205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:11:18.779 03:39:33 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:18.779 03:39:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:20.677 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:11:20.678 03:39:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:20.678 00:11:20.678 real 0m2.464s 
00:11:20.678 user 0m2.220s 00:11:20.678 sys 0m0.144s 00:11:20.678 03:39:35 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.678 03:39:35 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:11:20.678 ************************************ 00:11:20.678 END TEST accel_dif_generate 00:11:20.678 ************************************ 00:11:20.678 03:39:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:20.678 03:39:35 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:20.678 03:39:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:20.678 03:39:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.678 03:39:35 accel -- common/autotest_common.sh@10 -- # set +x 00:11:20.678 ************************************ 00:11:20.678 START TEST accel_dif_generate_copy 00:11:20.678 ************************************ 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:20.678 03:39:35 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:11:20.678 [2024-07-26 03:39:35.529934] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
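accel_dif_generate (just finished above, 2.464s of wall time) and accel_dif_generate_copy (starting here) reuse the same harness with only the '-w' argument changed: 'run_test accel_dif_generate accel_test -t 1 -w dif_generate' and 'run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy'. Outside the harness the three DIF workloads in this section could be cycled with a loop along these lines (a sketch, assuming the same build path as the log):

    for w in dif_verify dif_generate dif_generate_copy; do
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w"
    done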
00:11:20.678 [2024-07-26 03:39:35.530168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66087 ] 00:11:20.936 [2024-07-26 03:39:35.713731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.194 [2024-07-26 03:39:35.918971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.452 03:39:36 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:21.452 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:21.453 03:39:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
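The remaining two cases in this section, accel_comp and accel_decomp, follow once dif_generate_copy winds down below. They differ from the earlier workloads in that they operate on a real input file: the wrapper calls are 'run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib' and 'run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y', where '-l' appears to point accel_perf at the file to (de)compress and '-y' on the decompress run appears to enable result verification. Both readings are inferred from the option letters; the log itself only records the commands.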
00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:23.355 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:23.356 00:11:23.356 real 0m2.520s 00:11:23.356 user 0m2.249s 00:11:23.356 sys 0m0.167s 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.356 ************************************ 00:11:23.356 03:39:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:11:23.356 END TEST accel_dif_generate_copy 00:11:23.356 ************************************ 00:11:23.356 03:39:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:23.356 03:39:38 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:11:23.356 03:39:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:23.356 03:39:38 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:23.356 03:39:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.356 03:39:38 accel -- common/autotest_common.sh@10 -- # set +x 00:11:23.356 ************************************ 00:11:23.356 START TEST accel_comp 00:11:23.356 ************************************ 00:11:23.356 03:39:38 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:11:23.356 03:39:38 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:11:23.356 03:39:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:11:23.356 [2024-07-26 03:39:38.079322] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:23.356 [2024-07-26 03:39:38.079502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66134 ] 00:11:23.356 [2024-07-26 03:39:38.249386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.614 [2024-07-26 03:39:38.486095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.872 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:23.873 03:39:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:11:25.889 03:39:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:25.889 00:11:25.889 real 0m2.520s 00:11:25.889 user 0m0.016s 00:11:25.889 sys 0m0.001s 00:11:25.889 03:39:40 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.889 03:39:40 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:11:25.889 ************************************ 00:11:25.889 END TEST accel_comp 00:11:25.889 ************************************ 00:11:25.889 03:39:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:25.889 03:39:40 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:25.889 03:39:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:25.889 03:39:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.889 03:39:40 accel -- common/autotest_common.sh@10 -- # set +x 00:11:25.889 ************************************ 00:11:25.889 START TEST accel_decomp 00:11:25.889 ************************************ 00:11:25.889 03:39:40 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:11:25.889 03:39:40 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:11:25.889 [2024-07-26 03:39:40.637354] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:25.889 [2024-07-26 03:39:40.637502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66175 ] 00:11:26.147 [2024-07-26 03:39:40.798374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.147 [2024-07-26 03:39:41.035009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.405 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
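The repeated IFS=: / read -r var val / case "$var" triplets in the trace above are accel.sh stepping through the workload settings one key/value pair at a time: the opcode (accel_opc=decompress), the module (accel_module=software), the input path, the 4096-byte buffer, the 1-second run time, what looks like a verify flag (Yes/No), and the remaining numeric settings. A minimal sketch of that pattern is shown below; the helper name and the here-string option source are assumptions, only the IFS=: / read / case dispatch mirrors what the xtrace logs.

    # Hedged sketch of the option loop visible in the xtrace above.
    # parse_opts and the colon-separated input are assumed, not taken from the log.
    parse_opts() {
        local IFS=: var val
        while read -r var val; do
            case "$var" in
                opc)    accel_opc=$val ;;     # compress / decompress
                module) accel_module=$val ;;  # e.g. software
                *)      : ;;                  # size, run time, verify flag, etc.
            esac
        done <<<"$1"
    }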
00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:26.406 03:39:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:28.307 03:39:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:28.307 00:11:28.307 real 0m2.543s 00:11:28.307 user 0m0.013s 00:11:28.307 sys 0m0.003s 00:11:28.307 03:39:43 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.307 ************************************ 00:11:28.307 END TEST accel_decomp 00:11:28.307 ************************************ 00:11:28.307 03:39:43 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:11:28.307 03:39:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:28.307 03:39:43 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.307 03:39:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:28.308 03:39:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.308 03:39:43 accel -- common/autotest_common.sh@10 -- # set +x 00:11:28.308 ************************************ 00:11:28.308 START TEST accel_decomp_full 00:11:28.308 ************************************ 00:11:28.308 03:39:43 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:11:28.308 03:39:43 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:11:28.566 [2024-07-26 03:39:43.241905] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
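The accel_perf command the harness builds for this test is visible in the trace above; spelled out on its own it looks like the block below. The flags are read off the logged command line (-t run time in seconds, -w workload, -l input file, -y verify); treating -o 0 as "use the full input size" is an inference from the 111250-byte value that replaces the usual 4096 bytes in this variant, and /dev/fd/62 is simply the process-substituted JSON produced by build_accel_config.

    # Same invocation as logged above, reformatted for readability.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0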
00:11:28.566 [2024-07-26 03:39:43.242140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66227 ] 00:11:28.566 [2024-07-26 03:39:43.425068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.825 [2024-07-26 03:39:43.693495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:29.083 03:39:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:31.012 03:39:45 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:31.012 03:39:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:31.012 00:11:31.012 real 0m2.628s 00:11:31.012 user 0m2.369s 00:11:31.012 sys 0m0.154s 00:11:31.012 03:39:45 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.012 03:39:45 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:11:31.012 ************************************ 00:11:31.012 END TEST accel_decomp_full 00:11:31.013 ************************************ 00:11:31.013 03:39:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:31.013 03:39:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:31.013 03:39:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:31.013 03:39:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.013 03:39:45 accel -- common/autotest_common.sh@10 -- # set +x 00:11:31.013 ************************************ 00:11:31.013 START TEST accel_decomp_mcore 00:11:31.013 ************************************ 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:31.013 03:39:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:31.315 [2024-07-26 03:39:45.913881] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:31.315 [2024-07-26 03:39:45.914167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66268 ] 00:11:31.315 [2024-07-26 03:39:46.098689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.572 [2024-07-26 03:39:46.334863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.572 [2024-07-26 03:39:46.334935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.572 [2024-07-26 03:39:46.335030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.572 [2024-07-26 03:39:46.335032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
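The only difference from the single-core runs is the -m 0xf core mask on the command line above: bits 0-3 are set, so EAL reports four available cores and four reactors come up (the "core 1 / core 2 / core 0 / core 3" ordering is just the reactors starting nondeterministically). A small standalone snippet to expand such a mask:

    # Expand a hex core mask into the core numbers it enables (0xf -> 0 1 2 3).
    mask=0xf
    for i in {0..31}; do
        if (( (mask >> i) & 1 )); then printf '%d ' "$i"; fi
    done
    echo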
00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 
03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:31.830 03:39:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:33.729 00:11:33.729 real 0m2.687s 00:11:33.729 user 0m7.562s 00:11:33.729 sys 0m0.191s 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.729 03:39:48 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:33.729 ************************************ 00:11:33.729 END TEST accel_decomp_mcore 00:11:33.729 ************************************ 00:11:33.729 03:39:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:33.729 03:39:48 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:33.729 03:39:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:33.729 03:39:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.729 03:39:48 accel -- common/autotest_common.sh@10 -- # set +x 00:11:33.729 ************************************ 00:11:33.729 START TEST accel_decomp_full_mcore 00:11:33.729 ************************************ 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:11:33.729 03:39:48 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:33.987 [2024-07-26 03:39:48.637435] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:33.987 [2024-07-26 03:39:48.637652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66323 ] 00:11:33.987 [2024-07-26 03:39:48.821080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.245 [2024-07-26 03:39:49.069493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.245 [2024-07-26 03:39:49.069587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.245 [2024-07-26 03:39:49.069686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.245 [2024-07-26 03:39:49.069690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 03:39:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.405 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:36.406 00:11:36.406 real 0m2.621s 00:11:36.406 user 0m0.022s 00:11:36.406 sys 0m0.004s 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.406 03:39:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:36.406 ************************************ 00:11:36.406 END TEST accel_decomp_full_mcore 00:11:36.406 ************************************ 00:11:36.406 03:39:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:36.406 03:39:51 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:36.406 03:39:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:36.406 03:39:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.406 03:39:51 accel -- common/autotest_common.sh@10 -- # set +x 00:11:36.406 ************************************ 00:11:36.406 START TEST accel_decomp_mthread 00:11:36.406 ************************************ 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:36.406 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:36.406 [2024-07-26 03:39:51.297288] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
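Every block in this log follows the same shape: run_test prints the starred START TEST banner, times the wrapped accel_test call (the real/user/sys lines), prints the END TEST banner, and propagates the test's status. A simplified stand-in for that wrapper is sketched below; the real helper lives in common/autotest_common.sh and does more bookkeeping than this.

    # Hedged sketch of the run_test wrapper whose banners and timing appear above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }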
00:11:36.406 [2024-07-26 03:39:51.297493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66373 ] 00:11:36.663 [2024-07-26 03:39:51.468348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.921 [2024-07-26 03:39:51.702494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
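The mthread variants reuse the single-core invocation and simply append -T <n> (here -T 2, which appears to surface as the val=2 assignment further down in this trace); the full_mthread run later in the log combines it with -o 0. Side by side, assuming the base command shown earlier in the log:

    # Base command as logged earlier; the mthread variants only add flags.
    cmd=(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62
         -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y)
    "${cmd[@]}" -T 2          # accel_decomp_mthread
    "${cmd[@]}" -o 0 -T 2     # accel_decomp_full_mthread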
00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:37.179 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.180 03:39:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:39.104 00:11:39.104 real 0m2.536s 00:11:39.104 user 0m2.293s 00:11:39.104 sys 0m0.149s 00:11:39.104 ************************************ 00:11:39.104 END TEST accel_decomp_mthread 00:11:39.104 ************************************ 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.104 03:39:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 03:39:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:39.104 03:39:53 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:39.104 03:39:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:39.104 03:39:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.104 03:39:53 accel -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 ************************************ 00:11:39.104 START 
TEST accel_decomp_full_mthread 00:11:39.104 ************************************ 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:39.104 03:39:53 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:39.104 [2024-07-26 03:39:53.875832] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:11:39.104 [2024-07-26 03:39:53.876012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66419 ] 00:11:39.363 [2024-07-26 03:39:54.046673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.622 [2024-07-26 03:39:54.306097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:39.622 03:39:54 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.622 03:39:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:41.524 00:11:41.524 real 0m2.589s 00:11:41.524 user 0m2.324s 00:11:41.524 sys 0m0.164s 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.524 03:39:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:41.524 ************************************ 00:11:41.524 END TEST accel_decomp_full_mthread 00:11:41.524 ************************************ 
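For reference, the two mthread cases that just finished each reduce to a single accel_perf invocation driven by run_test; below is a minimal standalone sketch of the full-buffer variant, reproducing the flags recorded in this run. Paths assume the vagrant repo layout used throughout this log, and the -c accel JSON config that the harness pipes in over /dev/fd/62 is left out here on the assumption that the default software module is acceptable for a manual rerun.
SPDK=/home/vagrant/spdk_repo/spdk
# Flags copied from the logged run: 1-second run (-t 1), decompress workload (-w),
# the repo's pre-compressed test input (-l), output verification (-y),
# the "full" buffer setting used by this variant (-o 0), and two worker threads (-T 2).
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2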
00:11:41.783 03:39:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:41.783 03:39:56 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:11:41.783 03:39:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:41.783 03:39:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:11:41.783 03:39:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:41.783 03:39:56 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:41.783 03:39:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:41.783 03:39:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.783 03:39:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.783 03:39:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.783 03:39:56 accel -- common/autotest_common.sh@10 -- # set +x 00:11:41.783 03:39:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:41.783 03:39:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:41.783 03:39:56 accel -- accel/accel.sh@41 -- # jq -r . 00:11:41.783 ************************************ 00:11:41.783 START TEST accel_dif_functional_tests 00:11:41.783 ************************************ 00:11:41.783 03:39:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:41.783 [2024-07-26 03:39:56.544515] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:41.783 [2024-07-26 03:39:56.544675] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66467 ] 00:11:42.041 [2024-07-26 03:39:56.709219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.041 [2024-07-26 03:39:56.906225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.041 [2024-07-26 03:39:56.906295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.041 [2024-07-26 03:39:56.906295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.299 00:11:42.299 00:11:42.299 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.299 http://cunit.sourceforge.net/ 00:11:42.299 00:11:42.299 00:11:42.299 Suite: accel_dif 00:11:42.299 Test: verify: DIF generated, GUARD check ...passed 00:11:42.299 Test: verify: DIF generated, APPTAG check ...passed 00:11:42.299 Test: verify: DIF generated, REFTAG check ...passed 00:11:42.299 Test: verify: DIF not generated, GUARD check ...passed 00:11:42.299 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 03:39:57.189182] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:42.299 passed 00:11:42.299 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 03:39:57.189339] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:42.299 passed 00:11:42.299 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:42.299 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-26 03:39:57.189484] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:42.299 passed[2024-07-26 03:39:57.189648] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:42.299 00:11:42.299 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:11:42.299 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:42.299 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:42.299 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 03:39:57.190174] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:42.299 passed 00:11:42.299 Test: verify copy: DIF generated, GUARD check ...passed 00:11:42.299 Test: verify copy: DIF generated, APPTAG check ...passed 00:11:42.299 Test: verify copy: DIF generated, REFTAG check ...passed 00:11:42.299 Test: verify copy: DIF not generated, GUARD check ...passed 00:11:42.299 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-26 03:39:57.190638] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:42.300 [2024-07-26 03:39:57.190749] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:42.300 passed 00:11:42.300 Test: verify copy: DIF not generated, REFTAG check ...passed 00:11:42.300 Test: generate copy: DIF generated, GUARD check ...[2024-07-26 03:39:57.190915] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:42.300 passed 00:11:42.300 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:42.300 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:42.300 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:42.300 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:42.300 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:42.300 Test: generate copy: iovecs-len validate ...[2024-07-26 03:39:57.191874] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:42.300 passed 00:11:42.300 Test: generate copy: buffer alignment validate ...passed 00:11:42.300 00:11:42.300 Run Summary: Type Total Ran Passed Failed Inactive 00:11:42.300 suites 1 1 n/a 0 0 00:11:42.300 tests 26 26 26 0 0 00:11:42.300 asserts 115 115 115 0 n/a 00:11:42.300 00:11:42.300 Elapsed time = 0.009 seconds 00:11:43.674 00:11:43.675 real 0m1.896s 00:11:43.675 user 0m3.657s 00:11:43.675 sys 0m0.202s 00:11:43.675 03:39:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.675 03:39:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:11:43.675 ************************************ 00:11:43.675 END TEST accel_dif_functional_tests 00:11:43.675 ************************************ 00:11:43.675 03:39:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:43.675 ************************************ 00:11:43.675 END TEST accel 00:11:43.675 ************************************ 00:11:43.675 00:11:43.675 real 1m0.580s 00:11:43.675 user 1m6.369s 00:11:43.675 sys 0m4.845s 00:11:43.675 03:39:58 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.675 03:39:58 accel -- common/autotest_common.sh@10 -- # set +x 00:11:43.675 03:39:58 -- common/autotest_common.sh@1142 -- # return 0 00:11:43.675 03:39:58 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:43.675 03:39:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:43.675 03:39:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.675 03:39:58 -- common/autotest_common.sh@10 -- # set +x 00:11:43.675 ************************************ 00:11:43.675 START TEST accel_rpc 00:11:43.675 ************************************ 00:11:43.675 03:39:58 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:43.675 * Looking for test storage... 00:11:43.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:43.675 03:39:58 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:43.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.675 03:39:58 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66549 00:11:43.675 03:39:58 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:43.675 03:39:58 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66549 00:11:43.675 03:39:58 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66549 ']' 00:11:43.675 03:39:58 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.675 03:39:58 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.675 03:39:58 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.675 03:39:58 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.675 03:39:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.933 [2024-07-26 03:39:58.666659] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:11:43.933 [2024-07-26 03:39:58.667266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66549 ] 00:11:43.933 [2024-07-26 03:39:58.829474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.192 [2024-07-26 03:39:59.058937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.758 03:39:59 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.758 03:39:59 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:44.758 03:39:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:44.758 03:39:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:44.758 03:39:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:44.758 03:39:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:44.758 03:39:59 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:44.758 03:39:59 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:44.758 03:39:59 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.758 03:39:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.758 ************************************ 00:11:44.758 START TEST accel_assign_opcode 00:11:44.758 ************************************ 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:44.758 [2024-07-26 03:39:59.620068] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:44.758 [2024-07-26 03:39:59.628070] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.758 03:39:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.693 software 00:11:45.693 00:11:45.693 real 0m0.782s 00:11:45.693 user 0m0.061s 00:11:45.693 sys 0m0.008s 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.693 ************************************ 00:11:45.693 END TEST accel_assign_opcode 00:11:45.693 ************************************ 00:11:45.693 03:40:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:11:45.693 03:40:00 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66549 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66549 ']' 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66549 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66549 00:11:45.693 killing process with pid 66549 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66549' 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@967 -- # kill 66549 00:11:45.693 03:40:00 accel_rpc -- common/autotest_common.sh@972 -- # wait 66549 00:11:48.276 ************************************ 00:11:48.276 END TEST accel_rpc 00:11:48.276 ************************************ 00:11:48.276 00:11:48.276 real 0m4.199s 00:11:48.276 user 0m4.277s 00:11:48.276 sys 0m0.492s 00:11:48.276 03:40:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.276 03:40:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.276 03:40:02 -- common/autotest_common.sh@1142 -- # return 0 00:11:48.276 03:40:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:48.276 03:40:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.276 03:40:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.276 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.277 ************************************ 00:11:48.277 START TEST app_cmdline 00:11:48.277 ************************************ 00:11:48.277 03:40:02 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:48.277 * Looking for test storage... 
00:11:48.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:48.277 03:40:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:48.277 03:40:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66665 00:11:48.277 03:40:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:48.277 03:40:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66665 00:11:48.277 03:40:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66665 ']' 00:11:48.277 03:40:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.277 03:40:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.277 03:40:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.277 03:40:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.277 03:40:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:48.277 [2024-07-26 03:40:02.887693] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:48.277 [2024-07-26 03:40:02.887874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66665 ] 00:11:48.277 [2024-07-26 03:40:03.053231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.535 [2024-07-26 03:40:03.290152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:49.469 { 00:11:49.469 "version": "SPDK v24.09-pre git sha1 764779691", 00:11:49.469 "fields": { 00:11:49.469 "major": 24, 00:11:49.469 "minor": 9, 00:11:49.469 "patch": 0, 00:11:49.469 "suffix": "-pre", 00:11:49.469 "commit": "764779691" 00:11:49.469 } 00:11:49.469 } 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:49.469 03:40:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:49.469 03:40:04 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:49.469 03:40:04 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:49.728 request: 00:11:49.728 { 00:11:49.728 "method": "env_dpdk_get_mem_stats", 00:11:49.728 "req_id": 1 00:11:49.728 } 00:11:49.728 Got JSON-RPC error response 00:11:49.728 response: 00:11:49.728 { 00:11:49.728 "code": -32601, 00:11:49.728 "message": "Method not found" 00:11:49.728 } 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:49.728 03:40:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66665 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66665 ']' 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66665 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.728 03:40:04 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66665 00:11:49.986 killing process with pid 66665 00:11:49.986 03:40:04 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:49.986 03:40:04 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:49.986 03:40:04 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66665' 00:11:49.986 03:40:04 app_cmdline -- common/autotest_common.sh@967 -- # kill 66665 00:11:49.986 03:40:04 app_cmdline -- common/autotest_common.sh@972 -- # wait 66665 00:11:52.514 ************************************ 00:11:52.514 END TEST app_cmdline 00:11:52.514 ************************************ 00:11:52.514 00:11:52.514 real 0m4.138s 00:11:52.514 user 0m4.664s 00:11:52.514 sys 0m0.479s 00:11:52.514 03:40:06 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.514 03:40:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:52.514 03:40:06 -- common/autotest_common.sh@1142 -- # return 0 00:11:52.514 03:40:06 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:52.514 03:40:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:52.514 03:40:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.514 03:40:06 -- common/autotest_common.sh@10 -- # set +x 00:11:52.514 ************************************ 00:11:52.514 START TEST version 00:11:52.514 ************************************ 00:11:52.514 03:40:06 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:52.514 * Looking for test storage... 00:11:52.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:52.514 03:40:06 version -- app/version.sh@17 -- # get_header_version major 00:11:52.514 03:40:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # tr -d '"' 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # cut -f2 00:11:52.514 03:40:06 version -- app/version.sh@17 -- # major=24 00:11:52.514 03:40:06 version -- app/version.sh@18 -- # get_header_version minor 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # tr -d '"' 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # cut -f2 00:11:52.514 03:40:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:52.514 03:40:06 version -- app/version.sh@18 -- # minor=9 00:11:52.514 03:40:06 version -- app/version.sh@19 -- # get_header_version patch 00:11:52.514 03:40:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # cut -f2 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # tr -d '"' 00:11:52.514 03:40:06 version -- app/version.sh@19 -- # patch=0 00:11:52.514 03:40:06 version -- app/version.sh@20 -- # get_header_version suffix 00:11:52.514 03:40:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # cut -f2 00:11:52.514 03:40:06 version -- app/version.sh@14 -- # tr -d '"' 00:11:52.514 03:40:06 version -- app/version.sh@20 -- # suffix=-pre 00:11:52.514 03:40:06 version -- app/version.sh@22 -- # version=24.9 00:11:52.514 03:40:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:52.514 03:40:06 version -- app/version.sh@28 -- # version=24.9rc0 00:11:52.514 03:40:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:52.514 03:40:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:52.514 03:40:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:52.514 03:40:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:52.514 00:11:52.514 real 0m0.137s 00:11:52.515 user 0m0.080s 00:11:52.515 sys 0m0.085s 00:11:52.515 03:40:07 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.515 ************************************ 00:11:52.515 END TEST version 00:11:52.515 ************************************ 00:11:52.515 03:40:07 version -- common/autotest_common.sh@10 -- # set +x 
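The version test that just passed amounts to scraping the SPDK_VERSION_* macros out of include/spdk/version.h and checking that the bundled Python package reports the same string; a condensed sketch of that pipeline under the same repo layout (the helper name mirrors the harness, the inline values are the ones observed in this run, and the composition of the final string is paraphrased rather than copied verbatim):
SPDK=/home/vagrant/spdk_repo/spdk
get_header_version() {  # e.g. MAJOR -> 24, MINOR -> 9, PATCH -> 0, SUFFIX -> -pre
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$SPDK/include/spdk/version.h" | cut -f2 | tr -d '"'
}
version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.9 here; the patch level is appended only when non-zero
# A -pre suffix is rendered as rc0 (24.9rc0 in this run), and the Python module must agree:
PYTHONPATH="$SPDK/python" python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0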
00:11:52.515 03:40:07 -- common/autotest_common.sh@1142 -- # return 0 00:11:52.515 03:40:07 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:11:52.515 03:40:07 -- spdk/autotest.sh@198 -- # uname -s 00:11:52.515 03:40:07 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:11:52.515 03:40:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:11:52.515 03:40:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:11:52.515 03:40:07 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:11:52.515 03:40:07 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:52.515 03:40:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:52.515 03:40:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.515 03:40:07 -- common/autotest_common.sh@10 -- # set +x 00:11:52.515 ************************************ 00:11:52.515 START TEST blockdev_nvme 00:11:52.515 ************************************ 00:11:52.515 03:40:07 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:52.515 * Looking for test storage... 00:11:52.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:52.515 03:40:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66832 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:52.515 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:52.515 03:40:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66832 00:11:52.515 03:40:07 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66832 ']' 00:11:52.515 03:40:07 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.515 03:40:07 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.515 03:40:07 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.515 03:40:07 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.515 03:40:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:52.515 [2024-07-26 03:40:07.254475] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:52.515 [2024-07-26 03:40:07.254933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66832 ] 00:11:52.774 [2024-07-26 03:40:07.425363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.032 [2024-07-26 03:40:07.683265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.598 03:40:08 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.598 03:40:08 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:11:53.598 03:40:08 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:53.598 03:40:08 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:11:53.598 03:40:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:11:53.598 03:40:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:53.598 03:40:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:53.598 03:40:08 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:53.598 03:40:08 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.598 03:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:53.855 03:40:08 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.855 03:40:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:53.855 03:40:08 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.855 03:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n 
accel 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:54.114 03:40:08 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:54.114 03:40:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:54.115 03:40:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b4f2a43a-4341-4ea5-a456-8393db2ad18c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b4f2a43a-4341-4ea5-a456-8393db2ad18c",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "4ca8e30d-7fbf-46b9-9566-a232f2315b1f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4ca8e30d-7fbf-46b9-9566-a232f2315b1f",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9d83adce-8248-457e-b9b3-1f93122d1b73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9d83adce-8248-457e-b9b3-1f93122d1b73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "186db801-d426-48ac-a2ec-d5f2922bc733"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "186db801-d426-48ac-a2ec-d5f2922bc733",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6d08c9bc-4b35-4a06-a725-2f3735b2c9be"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d08c9bc-4b35-4a06-a725-2f3735b2c9be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9bc53ecd-b691-41a0-8791-536e730bb2f6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9bc53ecd-b691-41a0-8791-536e730bb2f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' 
"firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:54.115 03:40:08 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:54.115 03:40:08 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:11:54.115 03:40:08 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:54.115 03:40:08 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 66832 00:11:54.115 03:40:08 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66832 ']' 00:11:54.115 03:40:08 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66832 00:11:54.115 03:40:08 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:11:54.115 03:40:08 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:54.115 03:40:08 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66832 00:11:54.115 killing process with pid 66832 00:11:54.115 03:40:08 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:54.115 03:40:08 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:54.115 03:40:09 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66832' 00:11:54.115 03:40:09 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66832 00:11:54.115 03:40:09 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66832 00:11:56.645 03:40:11 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:56.645 03:40:11 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:56.645 03:40:11 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:56.645 03:40:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.645 03:40:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 ************************************ 00:11:56.645 START TEST bdev_hello_world 00:11:56.645 ************************************ 00:11:56.645 03:40:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:56.645 [2024-07-26 03:40:11.471364] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:11:56.645 [2024-07-26 03:40:11.471522] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66933 ] 00:11:56.903 [2024-07-26 03:40:11.629486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.160 [2024-07-26 03:40:11.816267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.727 [2024-07-26 03:40:12.453632] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:57.727 [2024-07-26 03:40:12.453704] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:57.727 [2024-07-26 03:40:12.453742] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:57.727 [2024-07-26 03:40:12.456836] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:57.727 [2024-07-26 03:40:12.457307] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:57.727 [2024-07-26 03:40:12.457349] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:57.727 [2024-07-26 03:40:12.457522] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:57.727 00:11:57.727 [2024-07-26 03:40:12.457570] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:58.660 ************************************ 00:11:58.660 END TEST bdev_hello_world 00:11:58.660 ************************************ 00:11:58.660 00:11:58.660 real 0m2.190s 00:11:58.660 user 0m1.856s 00:11:58.660 sys 0m0.220s 00:11:58.660 03:40:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.660 03:40:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:58.918 03:40:13 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:58.918 03:40:13 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:58.918 03:40:13 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:58.918 03:40:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.918 03:40:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.918 ************************************ 00:11:58.918 START TEST bdev_bounds 00:11:58.918 ************************************ 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66975 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:58.918 Process bdevio pid: 66975 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66975' 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66975 00:11:58.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
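[Editor's note] The hello_bdev example that completed above writes a single buffer to the selected bdev and reads it back. It can be rerun standalone with the same arguments the trace shows:

  # same invocation as bdev/blockdev.sh used above (root typically required)
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1
  # expected NOTICE sequence, as logged above: open Nvme0n1, open an io channel,
  # write, "bdev io write completed successfully", read, then
  # "Read string from bdev : Hello World!" followed by "Stopping app"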
00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66975 ']' 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.918 03:40:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:58.918 [2024-07-26 03:40:13.727166] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:11:58.918 [2024-07-26 03:40:13.727403] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66975 ] 00:11:59.176 [2024-07-26 03:40:13.913801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:59.434 [2024-07-26 03:40:14.140302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.434 [2024-07-26 03:40:14.140370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.434 [2024-07-26 03:40:14.140373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.001 03:40:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.001 03:40:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:12:00.001 03:40:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:00.260 I/O targets: 00:12:00.260 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:00.260 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:00.260 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:00.260 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:00.260 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:00.260 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:00.260 00:12:00.260 00:12:00.260 CUnit - A unit testing framework for C - Version 2.1-3 00:12:00.260 http://cunit.sourceforge.net/ 00:12:00.260 00:12:00.260 00:12:00.260 Suite: bdevio tests on: Nvme3n1 00:12:00.260 Test: blockdev write read block ...passed 00:12:00.260 Test: blockdev write zeroes read block ...passed 00:12:00.260 Test: blockdev write zeroes read no split ...passed 00:12:00.260 Test: blockdev write zeroes read split ...passed 00:12:00.260 Test: blockdev write zeroes read split partial ...passed 00:12:00.260 Test: blockdev reset ...[2024-07-26 03:40:15.030855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:12:00.260 passed 00:12:00.260 Test: blockdev write read 8 blocks ...[2024-07-26 03:40:15.035515] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:00.260 passed 00:12:00.260 Test: blockdev write read size > 128k ...passed 00:12:00.260 Test: blockdev write read invalid size ...passed 00:12:00.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:00.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:00.260 Test: blockdev write read max offset ...passed 00:12:00.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:00.260 Test: blockdev writev readv 8 blocks ...passed 00:12:00.260 Test: blockdev writev readv 30 x 1block ...passed 00:12:00.260 Test: blockdev writev readv block ...passed 00:12:00.260 Test: blockdev writev readv size > 128k ...passed 00:12:00.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:00.260 Test: blockdev comparev and writev ...[2024-07-26 03:40:15.044706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26800a000 len:0x1000 00:12:00.260 [2024-07-26 03:40:15.044802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:00.260 passed 00:12:00.260 Test: blockdev nvme passthru rw ...passed 00:12:00.260 Test: blockdev nvme passthru vendor specific ...[2024-07-26 03:40:15.045657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:00.260 [2024-07-26 03:40:15.045711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:00.260 passed 00:12:00.260 Test: blockdev nvme admin passthru ...passed 00:12:00.260 Test: blockdev copy ...passed 00:12:00.260 Suite: bdevio tests on: Nvme2n3 00:12:00.260 Test: blockdev write read block ...passed 00:12:00.260 Test: blockdev write zeroes read block ...passed 00:12:00.260 Test: blockdev write zeroes read no split ...passed 00:12:00.260 Test: blockdev write zeroes read split ...passed 00:12:00.260 Test: blockdev write zeroes read split partial ...passed 00:12:00.260 Test: blockdev reset ...[2024-07-26 03:40:15.136435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:00.260 [2024-07-26 03:40:15.140600] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:00.260 passed 00:12:00.260 Test: blockdev write read 8 blocks ...passed 00:12:00.260 Test: blockdev write read size > 128k ...passed 00:12:00.260 Test: blockdev write read invalid size ...passed 00:12:00.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:00.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:00.260 Test: blockdev write read max offset ...passed 00:12:00.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:00.260 Test: blockdev writev readv 8 blocks ...passed 00:12:00.260 Test: blockdev writev readv 30 x 1block ...passed 00:12:00.260 Test: blockdev writev readv block ...passed 00:12:00.260 Test: blockdev writev readv size > 128k ...passed 00:12:00.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:00.260 Test: blockdev comparev and writev ...[2024-07-26 03:40:15.149406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x277804000 len:0x1000 00:12:00.260 [2024-07-26 03:40:15.149483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:00.260 passed 00:12:00.260 Test: blockdev nvme passthru rw ...passed 00:12:00.260 Test: blockdev nvme passthru vendor specific ...[2024-07-26 03:40:15.150241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:00.260 [2024-07-26 03:40:15.150284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:00.260 passed 00:12:00.260 Test: blockdev nvme admin passthru ...passed 00:12:00.260 Test: blockdev copy ...passed 00:12:00.260 Suite: bdevio tests on: Nvme2n2 00:12:00.260 Test: blockdev write read block ...passed 00:12:00.260 Test: blockdev write zeroes read block ...passed 00:12:00.518 Test: blockdev write zeroes read no split ...passed 00:12:00.518 Test: blockdev write zeroes read split ...passed 00:12:00.518 Test: blockdev write zeroes read split partial ...passed 00:12:00.518 Test: blockdev reset ...[2024-07-26 03:40:15.219068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:00.519 [2024-07-26 03:40:15.223479] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:00.519 passed 00:12:00.519 Test: blockdev write read 8 blocks ...passed 00:12:00.519 Test: blockdev write read size > 128k ...passed 00:12:00.519 Test: blockdev write read invalid size ...passed 00:12:00.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:00.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:00.519 Test: blockdev write read max offset ...passed 00:12:00.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:00.519 Test: blockdev writev readv 8 blocks ...passed 00:12:00.519 Test: blockdev writev readv 30 x 1block ...passed 00:12:00.519 Test: blockdev writev readv block ...passed 00:12:00.519 Test: blockdev writev readv size > 128k ...passed 00:12:00.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:00.519 Test: blockdev comparev and writev ...[2024-07-26 03:40:15.230383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27423a000 len:0x1000 00:12:00.519 [2024-07-26 03:40:15.230469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:00.519 passed 00:12:00.519 Test: blockdev nvme passthru rw ...passed 00:12:00.519 Test: blockdev nvme passthru vendor specific ...passed 00:12:00.519 Test: blockdev nvme admin passthru ...[2024-07-26 03:40:15.231294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:00.519 [2024-07-26 03:40:15.231334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:00.519 passed 00:12:00.519 Test: blockdev copy ...passed 00:12:00.519 Suite: bdevio tests on: Nvme2n1 00:12:00.519 Test: blockdev write read block ...passed 00:12:00.519 Test: blockdev write zeroes read block ...passed 00:12:00.519 Test: blockdev write zeroes read no split ...passed 00:12:00.519 Test: blockdev write zeroes read split ...passed 00:12:00.519 Test: blockdev write zeroes read split partial ...passed 00:12:00.519 Test: blockdev reset ...[2024-07-26 03:40:15.299519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:00.519 passed 00:12:00.519 Test: blockdev write read 8 blocks ...[2024-07-26 03:40:15.304152] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:00.519 passed 00:12:00.519 Test: blockdev write read size > 128k ...passed 00:12:00.519 Test: blockdev write read invalid size ...passed 00:12:00.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:00.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:00.519 Test: blockdev write read max offset ...passed 00:12:00.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:00.519 Test: blockdev writev readv 8 blocks ...passed 00:12:00.519 Test: blockdev writev readv 30 x 1block ...passed 00:12:00.519 Test: blockdev writev readv block ...passed 00:12:00.519 Test: blockdev writev readv size > 128k ...passed 00:12:00.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:00.519 Test: blockdev comparev and writev ...passed 00:12:00.519 Test: blockdev nvme passthru rw ...[2024-07-26 03:40:15.310974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274234000 len:0x1000 00:12:00.519 [2024-07-26 03:40:15.311048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:00.519 passed 00:12:00.519 Test: blockdev nvme passthru vendor specific ...[2024-07-26 03:40:15.311668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:00.519 [2024-07-26 03:40:15.311712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:00.519 passed 00:12:00.519 Test: blockdev nvme admin passthru ...passed 00:12:00.519 Test: blockdev copy ...passed 00:12:00.519 Suite: bdevio tests on: Nvme1n1 00:12:00.519 Test: blockdev write read block ...passed 00:12:00.519 Test: blockdev write zeroes read block ...passed 00:12:00.519 Test: blockdev write zeroes read no split ...passed 00:12:00.519 Test: blockdev write zeroes read split ...passed 00:12:00.519 Test: blockdev write zeroes read split partial ...passed 00:12:00.519 Test: blockdev reset ...[2024-07-26 03:40:15.397763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:12:00.519 [2024-07-26 03:40:15.401570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:00.519 passed 00:12:00.519 Test: blockdev write read 8 blocks ...passed 00:12:00.519 Test: blockdev write read size > 128k ...passed 00:12:00.519 Test: blockdev write read invalid size ...passed 00:12:00.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:00.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:00.519 Test: blockdev write read max offset ...passed 00:12:00.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:00.519 Test: blockdev writev readv 8 blocks ...passed 00:12:00.519 Test: blockdev writev readv 30 x 1block ...passed 00:12:00.519 Test: blockdev writev readv block ...passed 00:12:00.519 Test: blockdev writev readv size > 128k ...passed 00:12:00.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:00.519 Test: blockdev comparev and writev ...[2024-07-26 03:40:15.407830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274230000 len:0x1000 00:12:00.519 [2024-07-26 03:40:15.407906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:00.519 passed 00:12:00.519 Test: blockdev nvme passthru rw ...passed 00:12:00.519 Test: blockdev nvme passthru vendor specific ...passed 00:12:00.519 Test: blockdev nvme admin passthru ...[2024-07-26 03:40:15.408593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:00.519 [2024-07-26 03:40:15.408645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:00.519 passed 00:12:00.519 Test: blockdev copy ...passed 00:12:00.519 Suite: bdevio tests on: Nvme0n1 00:12:00.519 Test: blockdev write read block ...passed 00:12:00.519 Test: blockdev write zeroes read block ...passed 00:12:00.778 Test: blockdev write zeroes read no split ...passed 00:12:00.778 Test: blockdev write zeroes read split ...passed 00:12:00.778 Test: blockdev write zeroes read split partial ...passed 00:12:00.778 Test: blockdev reset ...[2024-07-26 03:40:15.482311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:00.778 passed 00:12:00.778 Test: blockdev write read 8 blocks ...[2024-07-26 03:40:15.486161] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:00.778 passed 00:12:00.778 Test: blockdev write read size > 128k ...passed 00:12:00.778 Test: blockdev write read invalid size ...passed 00:12:00.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:00.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:00.778 Test: blockdev write read max offset ...passed 00:12:00.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:00.778 Test: blockdev writev readv 8 blocks ...passed 00:12:00.778 Test: blockdev writev readv 30 x 1block ...passed 00:12:00.778 Test: blockdev writev readv block ...passed 00:12:00.778 Test: blockdev writev readv size > 128k ...passed 00:12:00.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:00.778 Test: blockdev comparev and writev ...passed 00:12:00.778 Test: blockdev nvme passthru rw ...[2024-07-26 03:40:15.492967] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:00.778 separate metadata which is not supported yet. 00:12:00.778 passed 00:12:00.778 Test: blockdev nvme passthru vendor specific ...[2024-07-26 03:40:15.493524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:00.778 passed 00:12:00.778 Test: blockdev nvme admin passthru ...[2024-07-26 03:40:15.493599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:00.778 passed 00:12:00.778 Test: blockdev copy ...passed 00:12:00.778 00:12:00.778 Run Summary: Type Total Ran Passed Failed Inactive 00:12:00.778 suites 6 6 n/a 0 0 00:12:00.778 tests 138 138 138 0 0 00:12:00.778 asserts 893 893 893 0 n/a 00:12:00.778 00:12:00.778 Elapsed time = 1.524 seconds 00:12:00.778 0 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66975 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66975 ']' 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66975 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66975 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.778 killing process with pid 66975 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66975' 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66975 00:12:00.778 03:40:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66975 00:12:01.711 03:40:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:01.711 00:12:01.711 real 0m2.994s 00:12:01.711 user 0m7.448s 00:12:01.711 sys 0m0.397s 00:12:01.711 03:40:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.711 03:40:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:01.711 ************************************ 00:12:01.711 END TEST bdev_bounds 00:12:01.711 
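[Editor's note] For reference, the bdev_bounds run that just ended drives bdevio exactly as the trace shows: bdevio is started with -w against the same bdev.json, waits on /var/tmp/spdk.sock, and tests.py then triggers the suites. A rough standalone equivalent, copied from the commands above, would be:

  # terminal 1: start bdevio and let it wait for an RPC trigger
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # terminal 2: once it is listening on /var/tmp/spdk.sock, start the CUnit suites
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  # output is the per-bdev "Suite: bdevio tests on: ..." blocks and the
  # Run Summary shown above (6 suites, 138 tests, 893 asserts)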
************************************ 00:12:01.969 03:40:16 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:01.969 03:40:16 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:01.969 03:40:16 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:01.969 03:40:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.969 03:40:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.969 ************************************ 00:12:01.969 START TEST bdev_nbd 00:12:01.969 ************************************ 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=67040 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 67040 /var/tmp/spdk-nbd.sock 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 67040 ']' 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.969 03:40:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:01.969 [2024-07-26 03:40:16.744487] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:12:01.969 [2024-07-26 03:40:16.744640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.232 [2024-07-26 03:40:16.908985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.232 [2024-07-26 03:40:17.107144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:03.183 03:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.441 1+0 records in 00:12:03.441 1+0 records out 00:12:03.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566175 s, 7.2 MB/s 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:03.441 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.699 1+0 records in 00:12:03.699 1+0 records out 00:12:03.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633616 s, 6.5 MB/s 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:03.699 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:03.957 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:03.957 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:03.957 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.958 1+0 records in 00:12:03.958 1+0 records out 00:12:03.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559176 s, 7.3 MB/s 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:03.958 03:40:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:04.215 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:04.215 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:04.215 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:04.215 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:04.215 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:04.215 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:04.216 
03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.216 1+0 records in 00:12:04.216 1+0 records out 00:12:04.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701373 s, 5.8 MB/s 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.216 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.780 1+0 records in 00:12:04.780 1+0 records out 00:12:04.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709218 s, 5.8 MB/s 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.780 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:04.781 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.781 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:04.781 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:04.781 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.781 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.781 03:40:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.038 1+0 records in 00:12:05.038 1+0 records out 00:12:05.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087194 s, 4.7 MB/s 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.038 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.296 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd0", 00:12:05.296 "bdev_name": "Nvme0n1" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd1", 00:12:05.296 "bdev_name": "Nvme1n1" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd2", 00:12:05.296 "bdev_name": "Nvme2n1" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd3", 00:12:05.296 "bdev_name": "Nvme2n2" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd4", 00:12:05.296 "bdev_name": "Nvme2n3" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd5", 00:12:05.296 "bdev_name": "Nvme3n1" 00:12:05.296 } 00:12:05.296 ]' 00:12:05.296 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:05.296 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd0", 00:12:05.296 "bdev_name": "Nvme0n1" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd1", 00:12:05.296 
"bdev_name": "Nvme1n1" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd2", 00:12:05.296 "bdev_name": "Nvme2n1" 00:12:05.296 }, 00:12:05.296 { 00:12:05.296 "nbd_device": "/dev/nbd3", 00:12:05.296 "bdev_name": "Nvme2n2" 00:12:05.296 }, 00:12:05.297 { 00:12:05.297 "nbd_device": "/dev/nbd4", 00:12:05.297 "bdev_name": "Nvme2n3" 00:12:05.297 }, 00:12:05.297 { 00:12:05.297 "nbd_device": "/dev/nbd5", 00:12:05.297 "bdev_name": "Nvme3n1" 00:12:05.297 } 00:12:05.297 ]' 00:12:05.297 03:40:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:05.297 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:05.297 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.297 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:05.297 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.297 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:05.297 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.297 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.558 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.815 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd2 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.074 03:40:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:06.641 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.899 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:07.156 03:40:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.156 03:40:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.415 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.415 03:40:22 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:12:07.673 /dev/nbd0 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.673 1+0 records in 00:12:07.673 1+0 records out 00:12:07.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548652 s, 7.5 MB/s 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.673 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:12:07.930 /dev/nbd1 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.188 
1+0 records in 00:12:08.188 1+0 records out 00:12:08.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610118 s, 6.7 MB/s 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.188 03:40:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:12:08.446 /dev/nbd10 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.446 1+0 records in 00:12:08.446 1+0 records out 00:12:08.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708172 s, 5.8 MB/s 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.446 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:12:08.703 /dev/nbd11 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # 
local nbd_name=nbd11 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.703 1+0 records in 00:12:08.703 1+0 records out 00:12:08.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735033 s, 5.6 MB/s 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.703 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:12:09.001 /dev/nbd12 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.001 1+0 records in 00:12:09.001 1+0 records out 00:12:09.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797571 s, 5.1 MB/s 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:09.001 03:40:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:12:09.259 /dev/nbd13 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.259 1+0 records in 00:12:09.259 1+0 records out 00:12:09.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747697 s, 5.5 MB/s 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.259 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:09.516 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:09.516 { 00:12:09.516 "nbd_device": "/dev/nbd0", 00:12:09.516 "bdev_name": "Nvme0n1" 00:12:09.516 }, 00:12:09.516 { 00:12:09.516 "nbd_device": "/dev/nbd1", 00:12:09.516 "bdev_name": "Nvme1n1" 00:12:09.516 }, 00:12:09.516 { 00:12:09.516 "nbd_device": "/dev/nbd10", 00:12:09.516 "bdev_name": "Nvme2n1" 00:12:09.516 }, 00:12:09.516 { 00:12:09.516 "nbd_device": "/dev/nbd11", 00:12:09.516 "bdev_name": "Nvme2n2" 00:12:09.516 }, 
00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd12", 00:12:09.517 "bdev_name": "Nvme2n3" 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd13", 00:12:09.517 "bdev_name": "Nvme3n1" 00:12:09.517 } 00:12:09.517 ]' 00:12:09.517 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd0", 00:12:09.517 "bdev_name": "Nvme0n1" 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd1", 00:12:09.517 "bdev_name": "Nvme1n1" 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd10", 00:12:09.517 "bdev_name": "Nvme2n1" 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd11", 00:12:09.517 "bdev_name": "Nvme2n2" 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd12", 00:12:09.517 "bdev_name": "Nvme2n3" 00:12:09.517 }, 00:12:09.517 { 00:12:09.517 "nbd_device": "/dev/nbd13", 00:12:09.517 "bdev_name": "Nvme3n1" 00:12:09.517 } 00:12:09.517 ]' 00:12:09.517 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:09.774 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:09.774 /dev/nbd1 00:12:09.774 /dev/nbd10 00:12:09.774 /dev/nbd11 00:12:09.774 /dev/nbd12 00:12:09.774 /dev/nbd13' 00:12:09.774 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:09.774 /dev/nbd1 00:12:09.774 /dev/nbd10 00:12:09.774 /dev/nbd11 00:12:09.774 /dev/nbd12 00:12:09.774 /dev/nbd13' 00:12:09.774 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:09.774 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:09.774 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:09.774 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:09.775 256+0 records in 00:12:09.775 256+0 records out 00:12:09.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691864 s, 152 MB/s 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:09.775 256+0 records in 00:12:09.775 256+0 records out 00:12:09.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12545 s, 8.4 MB/s 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.775 03:40:24 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:10.033 256+0 records in 00:12:10.033 256+0 records out 00:12:10.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125514 s, 8.4 MB/s 00:12:10.033 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.033 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:10.033 256+0 records in 00:12:10.033 256+0 records out 00:12:10.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126215 s, 8.3 MB/s 00:12:10.033 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.033 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:10.291 256+0 records in 00:12:10.291 256+0 records out 00:12:10.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126983 s, 8.3 MB/s 00:12:10.291 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.291 03:40:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:10.291 256+0 records in 00:12:10.291 256+0 records out 00:12:10.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126745 s, 8.3 MB/s 00:12:10.291 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.291 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:10.550 256+0 records in 00:12:10.550 256+0 records out 00:12:10.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145592 s, 7.2 MB/s 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:10.550 03:40:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.550 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.808 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.065 03:40:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.631 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.890 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.148 03:40:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:12.406 
03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.406 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:12.663 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:12.663 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:12.663 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:12.921 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:13.179 malloc_lvol_verify 00:12:13.179 03:40:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:13.437 f9fdaa76-5fd5-4dcc-aa2e-76df206f1368 00:12:13.437 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:13.695 9e7a856c-db92-4bec-97fa-a3facb4878d2 00:12:13.695 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:13.953 /dev/nbd0 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:13.953 mke2fs 1.46.5 
(30-Dec-2021) 00:12:13.953 Discarding device blocks: 0/4096 done 00:12:13.953 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:13.953 00:12:13.953 Allocating group tables: 0/1 done 00:12:13.953 Writing inode tables: 0/1 done 00:12:13.953 Creating journal (1024 blocks): done 00:12:13.953 Writing superblocks and filesystem accounting information: 0/1 done 00:12:13.953 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.953 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 67040 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 67040 ']' 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 67040 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.211 03:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67040 00:12:14.211 killing process with pid 67040 00:12:14.211 03:40:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:14.211 03:40:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:14.211 03:40:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67040' 00:12:14.211 03:40:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 67040 00:12:14.211 03:40:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 67040 00:12:15.583 03:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:15.583 00:12:15.583 real 0m13.538s 00:12:15.583 user 0m19.676s 00:12:15.583 sys 0m4.102s 00:12:15.583 03:40:30 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.583 ************************************ 00:12:15.583 END TEST bdev_nbd 00:12:15.583 ************************************ 00:12:15.583 03:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 03:40:30 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:15.583 03:40:30 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:15.583 03:40:30 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:12:15.583 skipping fio tests on NVMe due to multi-ns failures. 00:12:15.583 03:40:30 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:12:15.583 03:40:30 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:15.583 03:40:30 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:15.583 03:40:30 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:15.583 03:40:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.583 03:40:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 ************************************ 00:12:15.583 START TEST bdev_verify 00:12:15.583 ************************************ 00:12:15.583 03:40:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:15.583 [2024-07-26 03:40:30.348863] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:12:15.583 [2024-07-26 03:40:30.349099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67446 ] 00:12:15.841 [2024-07-26 03:40:30.510962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:15.841 [2024-07-26 03:40:30.707648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.841 [2024-07-26 03:40:30.707648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.776 Running I/O for 5 seconds... 
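The START TEST / END TEST banners and the real/user/sys timings that bracket each sub-test in this trace are printed by the autotest run_test wrapper. A minimal sketch of what such a wrapper does, reconstructed only from the banner and timing lines visible in this log (the real helper in common/autotest_common.sh may differ in detail), is:

    # Hedged sketch, inferred from the log output; not the actual
    # common/autotest_common.sh implementation.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # run the test command, e.g. bdev_verify
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

Each sub-test that follows (bdev_verify, bdev_verify_big_io, bdev_write_zeroes, the JSON negative tests) is launched through this same wrapper, which is why the banner pairs and timing blocks repeat.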
00:12:22.038 00:12:22.038 Latency(us) 00:12:22.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.038 Verification LBA range: start 0x0 length 0xbd0bd 00:12:22.038 Nvme0n1 : 5.04 1574.88 6.15 0.00 0.00 81006.03 15192.44 101044.60 00:12:22.038 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.038 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:22.038 Nvme0n1 : 5.06 1441.45 5.63 0.00 0.00 88578.41 15609.48 196369.69 00:12:22.038 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x0 length 0xa0000 00:12:22.039 Nvme1n1 : 5.04 1574.38 6.15 0.00 0.00 80844.64 17873.45 91035.46 00:12:22.039 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0xa0000 length 0xa0000 00:12:22.039 Nvme1n1 : 5.06 1440.72 5.63 0.00 0.00 88467.57 18469.24 185883.93 00:12:22.039 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x0 length 0x80000 00:12:22.039 Nvme2n1 : 5.06 1580.28 6.17 0.00 0.00 80364.62 6315.29 97708.22 00:12:22.039 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x80000 length 0x80000 00:12:22.039 Nvme2n1 : 5.07 1440.13 5.63 0.00 0.00 88340.19 19660.80 176351.42 00:12:22.039 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x0 length 0x80000 00:12:22.039 Nvme2n2 : 5.07 1589.21 6.21 0.00 0.00 79906.43 8340.95 99614.72 00:12:22.039 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x80000 length 0x80000 00:12:22.039 Nvme2n2 : 5.07 1439.56 5.62 0.00 0.00 88208.98 19184.17 177304.67 00:12:22.039 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x0 length 0x80000 00:12:22.039 Nvme2n3 : 5.08 1588.71 6.21 0.00 0.00 79763.58 8519.68 101997.85 00:12:22.039 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x80000 length 0x80000 00:12:22.039 Nvme2n3 : 5.07 1439.04 5.62 0.00 0.00 88070.75 18707.55 184930.68 00:12:22.039 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x0 length 0x20000 00:12:22.039 Nvme3n1 : 5.08 1588.21 6.20 0.00 0.00 79632.75 8817.57 103427.72 00:12:22.039 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.039 Verification LBA range: start 0x20000 length 0x20000 00:12:22.039 Nvme3n1 : 5.07 1438.39 5.62 0.00 0.00 87921.62 10604.92 195416.44 00:12:22.039 =================================================================================================================== 00:12:22.039 Total : 18134.96 70.84 0.00 0.00 84069.82 6315.29 196369.69 00:12:23.414 00:12:23.414 real 0m7.829s 00:12:23.414 user 0m14.202s 00:12:23.414 sys 0m0.278s 00:12:23.414 03:40:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.414 03:40:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:23.414 ************************************ 00:12:23.414 END TEST bdev_verify 00:12:23.414 ************************************ 00:12:23.414 03:40:38 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:12:23.414 03:40:38 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:23.414 03:40:38 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:23.414 03:40:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.414 03:40:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.414 ************************************ 00:12:23.414 START TEST bdev_verify_big_io 00:12:23.414 ************************************ 00:12:23.414 03:40:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:23.414 [2024-07-26 03:40:38.190564] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:12:23.414 [2024-07-26 03:40:38.190773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67550 ] 00:12:23.672 [2024-07-26 03:40:38.379281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:23.935 [2024-07-26 03:40:38.610620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.935 [2024-07-26 03:40:38.610915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.524 Running I/O for 5 seconds... 00:12:31.081 00:12:31.081 Latency(us) 00:12:31.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.081 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x0 length 0xbd0b 00:12:31.081 Nvme0n1 : 5.62 125.36 7.83 0.00 0.00 981702.54 23950.43 1052389.00 00:12:31.081 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:31.081 Nvme0n1 : 5.61 125.44 7.84 0.00 0.00 982011.30 15073.28 1067641.02 00:12:31.081 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x0 length 0xa000 00:12:31.081 Nvme1n1 : 5.76 127.86 7.99 0.00 0.00 938459.98 86745.83 892242.85 00:12:31.081 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0xa000 length 0xa000 00:12:31.081 Nvme1n1 : 5.81 127.05 7.94 0.00 0.00 927784.71 93895.21 865551.83 00:12:31.081 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x0 length 0x8000 00:12:31.081 Nvme2n1 : 5.81 132.11 8.26 0.00 0.00 892964.31 52905.43 876990.84 00:12:31.081 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x8000 length 0x8000 00:12:31.081 Nvme2n1 : 5.89 122.51 7.66 0.00 0.00 933935.68 75306.82 1593835.52 00:12:31.081 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x0 length 0x8000 00:12:31.081 Nvme2n2 : 5.86 133.88 8.37 0.00 0.00 857971.05 46709.29 1197283.14 00:12:31.081 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:12:31.081 Verification LBA range: start 0x8000 length 0x8000 00:12:31.081 Nvme2n2 : 5.89 127.38 7.96 0.00 0.00 878795.66 71970.44 1624339.55 00:12:31.081 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x0 length 0x8000 00:12:31.081 Nvme2n3 : 5.87 136.37 8.52 0.00 0.00 809085.41 47662.55 937998.89 00:12:31.081 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x8000 length 0x8000 00:12:31.081 Nvme2n3 : 5.97 137.24 8.58 0.00 0.00 799402.15 20375.74 1654843.58 00:12:31.081 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x0 length 0x2000 00:12:31.081 Nvme3n1 : 5.95 155.84 9.74 0.00 0.00 687796.77 3381.06 983754.94 00:12:31.081 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.081 Verification LBA range: start 0x2000 length 0x2000 00:12:31.081 Nvme3n1 : 5.97 147.34 9.21 0.00 0.00 722731.34 1266.04 1700599.62 00:12:31.081 =================================================================================================================== 00:12:31.081 Total : 1598.38 99.90 0.00 0.00 859947.72 1266.04 1700599.62 00:12:32.455 00:12:32.455 real 0m9.095s 00:12:32.455 user 0m16.628s 00:12:32.455 sys 0m0.302s 00:12:32.455 03:40:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.455 ************************************ 00:12:32.455 03:40:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.455 END TEST bdev_verify_big_io 00:12:32.455 ************************************ 00:12:32.455 03:40:47 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:32.455 03:40:47 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.455 03:40:47 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:32.455 03:40:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.456 03:40:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 ************************************ 00:12:32.456 START TEST bdev_write_zeroes 00:12:32.456 ************************************ 00:12:32.456 03:40:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.456 [2024-07-26 03:40:47.320670] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:12:32.456 [2024-07-26 03:40:47.320851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67670 ] 00:12:32.714 [2024-07-26 03:40:47.505247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.972 [2024-07-26 03:40:47.705223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.538 Running I/O for 1 seconds... 
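Every bdevperf pass in this trace runs the same example binary against the generated bdev.json, changing only the workload flags. As a hedged illustration (paths and values copied from the invocation visible just above; -q is the queue depth, -o the I/O size in bytes, -w the workload type, -t the run time in seconds), the write_zeroes pass could be reproduced by hand with:

    # Illustrative manual re-run of the write_zeroes pass; flag values
    # match the bdevperf command captured above.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1

The verify and verify_big_io passes above differ only in the -w/-o/-t values and the extra flags (-C, -m 0x3) shown verbatim in their own command lines.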
00:12:34.911 00:12:34.911 Latency(us) 00:12:34.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.911 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:34.911 Nvme0n1 : 1.02 6730.81 26.29 0.00 0.00 18933.87 12571.00 110100.48 00:12:34.911 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:34.911 Nvme1n1 : 1.02 6961.98 27.20 0.00 0.00 18300.57 12690.15 53620.36 00:12:34.911 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:34.911 Nvme2n1 : 1.02 6890.58 26.92 0.00 0.00 18423.17 12630.57 59578.18 00:12:34.911 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:34.911 Nvme2n2 : 1.02 6881.53 26.88 0.00 0.00 18372.87 12928.47 58386.62 00:12:34.911 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:34.911 Nvme2n3 : 1.03 6928.70 27.07 0.00 0.00 18240.81 7685.59 58148.31 00:12:34.911 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:34.911 Nvme3n1 : 1.03 6919.76 27.03 0.00 0.00 18195.31 7923.90 57433.37 00:12:34.911 =================================================================================================================== 00:12:34.911 Total : 41313.36 161.38 0.00 0.00 18408.11 7685.59 110100.48 00:12:35.873 00:12:35.873 real 0m3.375s 00:12:35.873 user 0m3.030s 00:12:35.873 sys 0m0.214s 00:12:35.873 03:40:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.873 03:40:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:35.873 ************************************ 00:12:35.873 END TEST bdev_write_zeroes 00:12:35.873 ************************************ 00:12:35.873 03:40:50 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:35.873 03:40:50 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:35.873 03:40:50 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:35.873 03:40:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.873 03:40:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.873 ************************************ 00:12:35.873 START TEST bdev_json_nonenclosed 00:12:35.873 ************************************ 00:12:35.873 03:40:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:35.873 [2024-07-26 03:40:50.743350] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:12:35.873 [2024-07-26 03:40:50.743546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67723 ] 00:12:36.132 [2024-07-26 03:40:50.907787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.390 [2024-07-26 03:40:51.098053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.390 [2024-07-26 03:40:51.098212] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:36.390 [2024-07-26 03:40:51.098263] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:36.390 [2024-07-26 03:40:51.098295] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:36.953 00:12:36.953 real 0m0.914s 00:12:36.953 user 0m0.678s 00:12:36.953 sys 0m0.128s 00:12:36.953 03:40:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:12:36.953 03:40:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.953 ************************************ 00:12:36.953 END TEST bdev_json_nonenclosed 00:12:36.953 03:40:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:36.953 ************************************ 00:12:36.953 03:40:51 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:12:36.953 03:40:51 blockdev_nvme -- bdev/blockdev.sh@781 -- # true 00:12:36.953 03:40:51 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:36.953 03:40:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:36.953 03:40:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.953 03:40:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:36.953 ************************************ 00:12:36.953 START TEST bdev_json_nonarray 00:12:36.953 ************************************ 00:12:36.953 03:40:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:36.953 [2024-07-26 03:40:51.723833] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:12:36.953 [2024-07-26 03:40:51.723997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67753 ] 00:12:37.210 [2024-07-26 03:40:51.887626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.210 [2024-07-26 03:40:52.076293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.210 [2024-07-26 03:40:52.076428] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
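Both errors above are the expected outcomes of these negative tests: nonenclosed.json is rejected because its top level is not wrapped in an object, and nonarray.json because its 'subsystems' member is not an array. The fixture contents themselves are not reproduced in this log; purely as a hypothetical illustration of the shape json_config_prepare_ctx accepts, a minimal well-formed --json config could be written like this:

    # Hypothetical minimal config: a top-level JSON object ("enclosed in {}")
    # whose "subsystems" member is an array, the two properties the errors
    # above are checking for.
    printf '%s\n' \
      '{' \
      '  "subsystems": [' \
      '    { "subsystem": "bdev", "config": [] }' \
      '  ]' \
      '}' > /tmp/minimal_bdev.json

A file of that shape passed to bdevperf --json would get past the two checks exercised here; whether it then does useful I/O depends on the bdev configuration it declares.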
00:12:37.210 [2024-07-26 03:40:52.076460] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:37.210 [2024-07-26 03:40:52.076478] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:37.775 00:12:37.775 real 0m0.962s 00:12:37.775 user 0m0.712s 00:12:37.775 sys 0m0.140s 00:12:37.775 03:40:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:12:37.775 03:40:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.775 03:40:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:37.775 ************************************ 00:12:37.775 END TEST bdev_json_nonarray 00:12:37.775 ************************************ 00:12:37.775 03:40:52 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@784 -- # true 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:12:37.775 03:40:52 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:12:37.775 00:12:37.775 real 0m45.564s 00:12:37.775 user 1m8.853s 00:12:37.775 sys 0m6.582s 00:12:37.775 03:40:52 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.775 03:40:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.775 ************************************ 00:12:37.775 END TEST blockdev_nvme 00:12:37.775 ************************************ 00:12:37.775 03:40:52 -- common/autotest_common.sh@1142 -- # return 0 00:12:37.775 03:40:52 -- spdk/autotest.sh@213 -- # uname -s 00:12:37.775 03:40:52 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:12:37.775 03:40:52 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:37.775 03:40:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.775 03:40:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.775 03:40:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.775 ************************************ 00:12:37.775 START TEST blockdev_nvme_gpt 00:12:37.775 ************************************ 00:12:37.775 03:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:38.034 * Looking for test storage... 
00:12:38.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67829 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:38.034 03:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67829 00:12:38.034 03:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67829 ']' 00:12:38.034 03:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.034 03:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.034 03:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
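waitforlisten blocks at this point until the freshly started spdk_tgt (pid 67829 above) is answering on its RPC socket, so that the gpt setup that follows can issue RPCs safely. Its internals are not shown in this log; one way to poll for the same condition using only the rpc.py client that the rest of the trace already relies on is sketched below (an illustration under that assumption, not the actual helper):

    # Hedged sketch: poll the RPC socket until the target responds,
    # giving up after roughly 10 seconds.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

rpc_get_methods is a cheap call that succeeds as soon as the RPC server is listening, which is all the readiness check needs.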
00:12:38.034 03:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.034 03:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.034 [2024-07-26 03:40:52.889463] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:12:38.034 [2024-07-26 03:40:52.890245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67829 ] 00:12:38.292 [2024-07-26 03:40:53.049680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.549 [2024-07-26 03:40:53.250298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.480 03:40:54 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.480 03:40:54 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:12:39.480 03:40:54 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:39.480 03:40:54 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:12:39.480 03:40:54 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:39.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:39.748 Waiting for block devices as requested 00:12:39.748 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:39.748 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.029 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.029 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.293 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:12:45.293 BYT; 00:12:45.293 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:12:45.293 BYT; 00:12:45.293 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:45.293 03:40:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:45.293 03:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:12:46.227 The operation has completed successfully. 
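[Editor's note] The GPT preparation traced above reduces to three commands: parted writes the label and two half-disk partitions, then sgdisk stamps each partition with the type GUID parsed out of module/bdev/gpt/gpt.h plus a fixed unique-partition GUID. A consolidated sketch of those steps on a scratch device (running it standalone, and the device path, are assumptions; this destroys any data on the device):

    # Sketch of the partitioning steps logged above.
    dev=/dev/nvme0n1                                         # scratch device (assumption)
    SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b       # SPDK_GPT_PART_TYPE_GUID
    SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c   # SPDK_GPT_PART_TYPE_GUID_OLD

    # GPT label plus two partitions covering the first and second half of the disk
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%

    # Tag each partition with the type GUID the SPDK gpt bdev module recognises,
    # and a fixed unique-partition GUID that later assertions can look for
    sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"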
00:12:46.227 03:41:00 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:12:47.167 The operation has completed successfully. 00:12:47.167 03:41:01 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:47.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:48.331 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:48.331 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:48.331 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:48.331 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:48.331 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:12:48.331 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.331 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.331 [] 00:12:48.331 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.331 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:12:48.331 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:12:48.331 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:48.331 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:48.331 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:48.331 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.331 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.911 
03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:48.911 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:48.912 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5035bb4b-6b4c-4ff9-bb52-3198e06fef38"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5035bb4b-6b4c-4ff9-bb52-3198e06fef38",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "44c0f295-77b9-4427-ac68-b0e96ba71fb4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "44c0f295-77b9-4427-ac68-b0e96ba71fb4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ebc36956-f044-4c7e-b97a-d26fd63853a3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ebc36956-f044-4c7e-b97a-d26fd63853a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b10450bb-0852-45e2-b4f4-ddf82831fa5a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b10450bb-0852-45e2-b4f4-ddf82831fa5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "41f9e66f-7af1-4087-b9f9-a488fbbe0a62"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "41f9e66f-7af1-4087-b9f9-a488fbbe0a62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:48.912 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:48.912 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:12:48.912 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:48.912 03:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 67829 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67829 ']' 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67829 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67829 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67829' 00:12:48.912 killing process with pid 67829 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67829 00:12:48.912 03:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67829 00:12:51.444 03:41:05 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:51.444 03:41:05 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:51.444 03:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:51.444 03:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.444 03:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:51.444 ************************************ 00:12:51.444 START TEST bdev_hello_world 00:12:51.444 ************************************ 00:12:51.444 03:41:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:51.444 [2024-07-26 03:41:06.104374] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:12:51.444 [2024-07-26 03:41:06.104610] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68463 ] 00:12:51.444 [2024-07-26 03:41:06.273851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.702 [2024-07-26 03:41:06.499175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.268 [2024-07-26 03:41:07.140456] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:52.268 [2024-07-26 03:41:07.140521] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:52.268 [2024-07-26 03:41:07.140559] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:52.268 [2024-07-26 03:41:07.143810] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:52.268 [2024-07-26 03:41:07.144279] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:52.268 [2024-07-26 03:41:07.144324] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:52.268 [2024-07-26 03:41:07.144548] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:52.268 00:12:52.268 [2024-07-26 03:41:07.144614] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:53.642 00:12:53.642 real 0m2.316s 00:12:53.642 user 0m1.961s 00:12:53.642 sys 0m0.237s 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 ************************************ 00:12:53.642 END TEST bdev_hello_world 00:12:53.642 ************************************ 00:12:53.642 03:41:08 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:12:53.642 03:41:08 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:53.642 03:41:08 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.642 03:41:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.642 03:41:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 ************************************ 00:12:53.642 START TEST bdev_bounds 00:12:53.642 ************************************ 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=68510 00:12:53.642 Process bdevio pid: 68510 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 68510' 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 68510 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68510 ']' 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.642 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.642 03:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 [2024-07-26 03:41:08.461342] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:12:53.642 [2024-07-26 03:41:08.461581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68510 ] 00:12:53.902 [2024-07-26 03:41:08.643451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.160 [2024-07-26 03:41:08.868684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.160 [2024-07-26 03:41:08.868767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.160 [2024-07-26 03:41:08.868767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.727 03:41:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.727 03:41:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:12:54.727 03:41:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:54.985 I/O targets: 00:12:54.985 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:54.985 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:12:54.985 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:12:54.985 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.985 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.985 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.985 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:54.985 00:12:54.985 00:12:54.985 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.985 http://cunit.sourceforge.net/ 00:12:54.985 00:12:54.985 00:12:54.985 Suite: bdevio tests on: Nvme3n1 00:12:54.985 Test: blockdev write read block ...passed 00:12:54.985 Test: blockdev write zeroes read block ...passed 00:12:54.985 Test: blockdev write zeroes read no split ...passed 00:12:54.985 Test: blockdev write zeroes read split ...passed 00:12:54.985 Test: blockdev write zeroes read split partial ...passed 00:12:54.985 Test: blockdev reset ...[2024-07-26 03:41:09.769855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:12:54.985 [2024-07-26 03:41:09.774691] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:54.985 passed 00:12:54.985 Test: blockdev write read 8 blocks ...passed 00:12:54.985 Test: blockdev write read size > 128k ...passed 00:12:54.985 Test: blockdev write read invalid size ...passed 00:12:54.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.985 Test: blockdev write read max offset ...passed 00:12:54.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.985 Test: blockdev writev readv 8 blocks ...passed 00:12:54.985 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.985 Test: blockdev writev readv block ...passed 00:12:54.985 Test: blockdev writev readv size > 128k ...passed 00:12:54.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.985 Test: blockdev comparev and writev ...[2024-07-26 03:41:09.783404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26d806000 len:0x1000 00:12:54.985 [2024-07-26 03:41:09.783498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:54.985 passed 00:12:54.985 Test: blockdev nvme passthru rw ...passed 00:12:54.985 Test: blockdev nvme passthru vendor specific ...[2024-07-26 03:41:09.784330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:54.985 [2024-07-26 03:41:09.784386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:54.985 passed 00:12:54.985 Test: blockdev nvme admin passthru ...passed 00:12:54.985 Test: blockdev copy ...passed 00:12:54.985 Suite: bdevio tests on: Nvme2n3 00:12:54.985 Test: blockdev write read block ...passed 00:12:54.985 Test: blockdev write zeroes read block ...passed 00:12:54.985 Test: blockdev write zeroes read no split ...passed 00:12:54.985 Test: blockdev write zeroes read split ...passed 00:12:54.985 Test: blockdev write zeroes read split partial ...passed 00:12:54.985 Test: blockdev reset ...[2024-07-26 03:41:09.868686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:54.985 passed 00:12:54.985 Test: blockdev write read 8 blocks ...[2024-07-26 03:41:09.873655] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:54.985 passed 00:12:54.985 Test: blockdev write read size > 128k ...passed 00:12:54.985 Test: blockdev write read invalid size ...passed 00:12:54.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.985 Test: blockdev write read max offset ...passed 00:12:54.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.986 Test: blockdev writev readv 8 blocks ...passed 00:12:54.986 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.986 Test: blockdev writev readv block ...passed 00:12:54.986 Test: blockdev writev readv size > 128k ...passed 00:12:54.986 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.986 Test: blockdev comparev and writev ...[2024-07-26 03:41:09.882054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ce3c000 len:0x1000 00:12:54.986 [2024-07-26 03:41:09.882132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:54.986 passed 00:12:54.986 Test: blockdev nvme passthru rw ...passed 00:12:54.986 Test: blockdev nvme passthru vendor specific ...[2024-07-26 03:41:09.882920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:54.986 [2024-07-26 03:41:09.882968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:54.986 passed 00:12:55.243 Test: blockdev nvme admin passthru ...passed 00:12:55.243 Test: blockdev copy ...passed 00:12:55.243 Suite: bdevio tests on: Nvme2n2 00:12:55.243 Test: blockdev write read block ...passed 00:12:55.243 Test: blockdev write zeroes read block ...passed 00:12:55.243 Test: blockdev write zeroes read no split ...passed 00:12:55.244 Test: blockdev write zeroes read split ...passed 00:12:55.244 Test: blockdev write zeroes read split partial ...passed 00:12:55.244 Test: blockdev reset ...[2024-07-26 03:41:09.966334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:55.244 [2024-07-26 03:41:09.970865] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:55.244 passed 00:12:55.244 Test: blockdev write read 8 blocks ...passed 00:12:55.244 Test: blockdev write read size > 128k ...passed 00:12:55.244 Test: blockdev write read invalid size ...passed 00:12:55.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.244 Test: blockdev write read max offset ...passed 00:12:55.244 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.244 Test: blockdev writev readv 8 blocks ...passed 00:12:55.244 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.244 Test: blockdev writev readv block ...passed 00:12:55.244 Test: blockdev writev readv size > 128k ...passed 00:12:55.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.244 Test: blockdev comparev and writev ...[2024-07-26 03:41:09.980614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ce36000 len:0x1000 00:12:55.244 [2024-07-26 03:41:09.980701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:55.244 passed 00:12:55.244 Test: blockdev nvme passthru rw ...passed 00:12:55.244 Test: blockdev nvme passthru vendor specific ...[2024-07-26 03:41:09.981584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:55.244 [2024-07-26 03:41:09.981629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:55.244 passed 00:12:55.244 Test: blockdev nvme admin passthru ...passed 00:12:55.244 Test: blockdev copy ...passed 00:12:55.244 Suite: bdevio tests on: Nvme2n1 00:12:55.244 Test: blockdev write read block ...passed 00:12:55.244 Test: blockdev write zeroes read block ...passed 00:12:55.244 Test: blockdev write zeroes read no split ...passed 00:12:55.244 Test: blockdev write zeroes read split ...passed 00:12:55.244 Test: blockdev write zeroes read split partial ...passed 00:12:55.244 Test: blockdev reset ...[2024-07-26 03:41:10.067484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:55.244 [2024-07-26 03:41:10.071902] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:55.244 passed 00:12:55.244 Test: blockdev write read 8 blocks ...passed 00:12:55.244 Test: blockdev write read size > 128k ...passed 00:12:55.244 Test: blockdev write read invalid size ...passed 00:12:55.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.244 Test: blockdev write read max offset ...passed 00:12:55.244 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.244 Test: blockdev writev readv 8 blocks ...passed 00:12:55.244 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.244 Test: blockdev writev readv block ...passed 00:12:55.244 Test: blockdev writev readv size > 128k ...passed 00:12:55.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.244 Test: blockdev comparev and writev ...[2024-07-26 03:41:10.081835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ce32000 len:0x1000 00:12:55.244 [2024-07-26 03:41:10.081939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:55.244 passed 00:12:55.244 Test: blockdev nvme passthru rw ...passed 00:12:55.244 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.244 Test: blockdev nvme admin passthru ...[2024-07-26 03:41:10.083100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:55.244 [2024-07-26 03:41:10.083185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:55.244 passed 00:12:55.244 Test: blockdev copy ...passed 00:12:55.244 Suite: bdevio tests on: Nvme1n1p2 00:12:55.244 Test: blockdev write read block ...passed 00:12:55.244 Test: blockdev write zeroes read block ...passed 00:12:55.244 Test: blockdev write zeroes read no split ...passed 00:12:55.244 Test: blockdev write zeroes read split ...passed 00:12:55.502 Test: blockdev write zeroes read split partial ...passed 00:12:55.502 Test: blockdev reset ...[2024-07-26 03:41:10.156191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:12:55.502 [2024-07-26 03:41:10.160164] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:55.502 passed 00:12:55.502 Test: blockdev write read 8 blocks ...passed 00:12:55.502 Test: blockdev write read size > 128k ...passed 00:12:55.502 Test: blockdev write read invalid size ...passed 00:12:55.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.502 Test: blockdev write read max offset ...passed 00:12:55.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.502 Test: blockdev writev readv 8 blocks ...passed 00:12:55.502 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.502 Test: blockdev writev readv block ...passed 00:12:55.502 Test: blockdev writev readv size > 128k ...passed 00:12:55.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.502 Test: blockdev comparev and writev ...[2024-07-26 03:41:10.169196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27ce2e000 len:0x1000 00:12:55.502 [2024-07-26 03:41:10.169446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:12:55.502 Test: blockdev nvme passthru rw ...passed 00:12:55.502 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.502 Test: blockdev nvme admin passthru ...passed 00:12:55.502 Test: blockdev copy ...0 sqhd:0018 p:1 m:0 dnr:1 00:12:55.502 passed 00:12:55.502 Suite: bdevio tests on: Nvme1n1p1 00:12:55.502 Test: blockdev write read block ...passed 00:12:55.502 Test: blockdev write zeroes read block ...passed 00:12:55.502 Test: blockdev write zeroes read no split ...passed 00:12:55.502 Test: blockdev write zeroes read split ...passed 00:12:55.502 Test: blockdev write zeroes read split partial ...passed 00:12:55.502 Test: blockdev reset ...[2024-07-26 03:41:10.233670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:12:55.502 [2024-07-26 03:41:10.237434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:55.502 passed 00:12:55.502 Test: blockdev write read 8 blocks ...passed 00:12:55.502 Test: blockdev write read size > 128k ...passed 00:12:55.502 Test: blockdev write read invalid size ...passed 00:12:55.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.502 Test: blockdev write read max offset ...passed 00:12:55.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.502 Test: blockdev writev readv 8 blocks ...passed 00:12:55.502 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.502 Test: blockdev writev readv block ...passed 00:12:55.502 Test: blockdev writev readv size > 128k ...passed 00:12:55.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.502 Test: blockdev comparev and writev ...[2024-07-26 03:41:10.245828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27440e000 len:0x1000 00:12:55.502 [2024-07-26 03:41:10.245908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:55.502 passed 00:12:55.502 Test: blockdev nvme passthru rw ...passed 00:12:55.502 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.502 Test: blockdev nvme admin passthru ...passed 00:12:55.502 Test: blockdev copy ...passed 00:12:55.502 Suite: bdevio tests on: Nvme0n1 00:12:55.502 Test: blockdev write read block ...passed 00:12:55.502 Test: blockdev write zeroes read block ...passed 00:12:55.502 Test: blockdev write zeroes read no split ...passed 00:12:55.502 Test: blockdev write zeroes read split ...passed 00:12:55.502 Test: blockdev write zeroes read split partial ...passed 00:12:55.502 Test: blockdev reset ...[2024-07-26 03:41:10.317828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:55.502 passed 00:12:55.502 Test: blockdev write read 8 blocks ...[2024-07-26 03:41:10.321485] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:55.502 passed 00:12:55.502 Test: blockdev write read size > 128k ...passed 00:12:55.502 Test: blockdev write read invalid size ...passed 00:12:55.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.502 Test: blockdev write read max offset ...passed 00:12:55.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.502 Test: blockdev writev readv 8 blocks ...passed 00:12:55.502 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.502 Test: blockdev writev readv block ...passed 00:12:55.502 Test: blockdev writev readv size > 128k ...passed 00:12:55.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.502 Test: blockdev comparev and writev ...passed 00:12:55.503 Test: blockdev nvme passthru rw ...[2024-07-26 03:41:10.329564] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:55.503 separate metadata which is not supported yet. 
00:12:55.503 passed 00:12:55.503 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.503 Test: blockdev nvme admin passthru ...[2024-07-26 03:41:10.330104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:55.503 [2024-07-26 03:41:10.330170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:55.503 passed 00:12:55.503 Test: blockdev copy ...passed 00:12:55.503 00:12:55.503 Run Summary: Type Total Ran Passed Failed Inactive 00:12:55.503 suites 7 7 n/a 0 0 00:12:55.503 tests 161 161 161 0 0 00:12:55.503 asserts 1025 1025 1025 0 n/a 00:12:55.503 00:12:55.503 Elapsed time = 1.800 seconds 00:12:55.503 0 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 68510 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68510 ']' 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68510 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68510 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68510' 00:12:55.503 killing process with pid 68510 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68510 00:12:55.503 03:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68510 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:56.885 00:12:56.885 real 0m3.146s 00:12:56.885 user 0m7.667s 00:12:56.885 sys 0m0.382s 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:56.885 ************************************ 00:12:56.885 END TEST bdev_bounds 00:12:56.885 ************************************ 00:12:56.885 03:41:11 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:12:56.885 03:41:11 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:56.885 03:41:11 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:56.885 03:41:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.885 03:41:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:56.885 ************************************ 00:12:56.885 START TEST bdev_nbd 00:12:56.885 ************************************ 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname 
-s 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=68576 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 68576 /var/tmp/spdk-nbd.sock 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68576 ']' 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:56.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.885 03:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:56.885 [2024-07-26 03:41:11.653468] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
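[Editor's note] The nbd phase that starts here exports each bdev as a kernel /dev/nbdN node over the dedicated RPC socket, waits for the device to show up in /proc/partitions, then probes it with a single O_DIRECT read, as the trace below shows. A condensed sketch of one such round trip (bdev and device names as in the log; the retry bound is an assumption):

    # Sketch of the per-device nbd round trip performed by the test below.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0   # map the bdev onto /dev/nbd0

    for _ in $(seq 1 20); do                             # waitfornbd equivalent
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done

    # Read one 4 KiB block with O_DIRECT to prove the mapping carries I/O
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0            # tear the mapping back down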
00:12:56.885 [2024-07-26 03:41:11.653673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.163 [2024-07-26 03:41:11.833340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.421 [2024-07-26 03:41:12.100075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:57.989 03:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.247 1+0 records in 00:12:58.247 1+0 records out 00:12:58.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642972 s, 6.4 MB/s 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:58.247 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.506 1+0 records in 00:12:58.506 1+0 records out 00:12:58.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597929 s, 6.9 MB/s 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:58.506 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.072 1+0 records in 00:12:59.072 1+0 records out 00:12:59.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577565 s, 7.1 MB/s 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:59.072 03:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.331 1+0 records in 00:12:59.331 1+0 records out 00:12:59.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582527 s, 7.0 MB/s 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:59.331 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.589 1+0 records in 00:12:59.589 1+0 records out 00:12:59.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653905 s, 6.3 MB/s 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:59.589 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.847 1+0 records in 00:12:59.847 1+0 records out 00:12:59.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000849014 s, 4.8 MB/s 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:59.847 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:00.105 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:00.105 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:00.105 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:00.105 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:13:00.105 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:00.105 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.106 1+0 records in 00:13:00.106 1+0 records out 00:13:00.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000884654 s, 4.6 MB/s 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:00.106 03:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:00.364 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd0", 00:13:00.364 "bdev_name": "Nvme0n1" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd1", 00:13:00.364 "bdev_name": "Nvme1n1p1" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd2", 00:13:00.364 "bdev_name": "Nvme1n1p2" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd3", 00:13:00.364 "bdev_name": "Nvme2n1" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd4", 00:13:00.364 "bdev_name": "Nvme2n2" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd5", 00:13:00.364 "bdev_name": "Nvme2n3" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd6", 00:13:00.364 "bdev_name": "Nvme3n1" 00:13:00.364 } 00:13:00.364 ]' 00:13:00.364 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:00.364 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd0", 00:13:00.364 "bdev_name": "Nvme0n1" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd1", 00:13:00.364 "bdev_name": "Nvme1n1p1" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd2", 00:13:00.364 "bdev_name": "Nvme1n1p2" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd3", 00:13:00.364 "bdev_name": "Nvme2n1" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd4", 00:13:00.364 "bdev_name": "Nvme2n2" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd5", 00:13:00.364 "bdev_name": "Nvme2n3" 00:13:00.364 }, 00:13:00.364 { 00:13:00.364 "nbd_device": "/dev/nbd6", 00:13:00.364 "bdev_name": "Nvme3n1" 00:13:00.364 } 00:13:00.364 ]' 00:13:00.364 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:00.622 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:13:00.622 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.622 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:13:00.622 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.622 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:00.622 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.622 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.880 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.139 03:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.398 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.398 03:41:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.658 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.917 03:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.175 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.740 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.998 
03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:02.998 03:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:03.256 /dev/nbd0 00:13:03.256 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:03.256 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:03.256 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:03.256 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:03.256 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:03.256 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:03.256 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.257 1+0 records in 00:13:03.257 1+0 records out 00:13:03.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674342 s, 6.1 MB/s 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:03.257 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:13:03.522 /dev/nbd1 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:03.523 03:41:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.523 1+0 records in 00:13:03.523 1+0 records out 00:13:03.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681516 s, 6.0 MB/s 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:03.523 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:13:03.781 /dev/nbd10 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.781 1+0 records in 00:13:03.781 1+0 records out 00:13:03.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663947 s, 6.2 MB/s 00:13:03.781 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.039 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:04.039 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.039 03:41:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:04.039 03:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:04.039 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.039 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:04.039 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:13:04.297 /dev/nbd11 00:13:04.297 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:04.297 03:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.297 1+0 records in 00:13:04.297 1+0 records out 00:13:04.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658385 s, 6.2 MB/s 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:04.297 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:13:04.555 /dev/nbd12 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
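The entries around this point repeat one pattern per bdev: nbd_start_disk attaches the bdev to an explicit /dev/nbdX, waitfornbd polls /proc/partitions and performs a 4 KiB O_DIRECT read, and nbd_dd_data_verify later round-trips 1 MiB of random data with dd and cmp. The sketch below condenses that loop for illustration; it assumes the RPC socket from the earlier sketch is reachable and is not the nbd_common.sh implementation itself. The writes are destructive to the start of each device, exactly as in the test run.

#!/usr/bin/env bash
# Sketch of the per-device check traced in this section: attach each bdev to an
# explicit /dev/nbdX, wait for it in /proc/partitions, do a small direct read,
# then round-trip 1 MiB of random data and compare byte-for-byte.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumption: adjust to your checkout
RPC_SOCK=/var/tmp/spdk-nbd.sock
rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" "$@"; }

bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

# Random reference data for the write/readback comparison.
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256 status=none

for i in "${!bdevs[@]}"; do
    bdev=${bdevs[$i]}
    nbd=${nbds[$i]}
    rpc nbd_start_disk "$bdev" "$nbd"

    # Readiness: the device name must appear in /proc/partitions, and a
    # single 4 KiB O_DIRECT read must succeed (mirrors waitfornbd).
    for _ in $(seq 1 20); do
        grep -q -w "$(basename "$nbd")" /proc/partitions && break
        sleep 0.1
    done
    dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct status=none

    # Data path: write 1 MiB of random data, then compare it against the source.
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct status=none
    cmp -b -n 1M "$tmp" "$nbd"

    rpc nbd_stop_disk "$nbd"
done
rm -f "$tmp"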
00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.555 1+0 records in 00:13:04.555 1+0 records out 00:13:04.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00158962 s, 2.6 MB/s 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:04.555 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:13:04.814 /dev/nbd13 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.814 1+0 records in 00:13:04.814 1+0 records out 00:13:04.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521543 s, 7.9 MB/s 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:04.814 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:13:05.072 /dev/nbd14 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.072 1+0 records in 00:13:05.072 1+0 records out 00:13:05.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060012 s, 6.8 MB/s 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.072 03:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd0", 00:13:05.638 "bdev_name": "Nvme0n1" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd1", 00:13:05.638 "bdev_name": "Nvme1n1p1" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd10", 00:13:05.638 "bdev_name": "Nvme1n1p2" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd11", 00:13:05.638 "bdev_name": "Nvme2n1" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd12", 00:13:05.638 "bdev_name": "Nvme2n2" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd13", 00:13:05.638 "bdev_name": "Nvme2n3" 
00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd14", 00:13:05.638 "bdev_name": "Nvme3n1" 00:13:05.638 } 00:13:05.638 ]' 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd0", 00:13:05.638 "bdev_name": "Nvme0n1" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd1", 00:13:05.638 "bdev_name": "Nvme1n1p1" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd10", 00:13:05.638 "bdev_name": "Nvme1n1p2" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd11", 00:13:05.638 "bdev_name": "Nvme2n1" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd12", 00:13:05.638 "bdev_name": "Nvme2n2" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd13", 00:13:05.638 "bdev_name": "Nvme2n3" 00:13:05.638 }, 00:13:05.638 { 00:13:05.638 "nbd_device": "/dev/nbd14", 00:13:05.638 "bdev_name": "Nvme3n1" 00:13:05.638 } 00:13:05.638 ]' 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:05.638 /dev/nbd1 00:13:05.638 /dev/nbd10 00:13:05.638 /dev/nbd11 00:13:05.638 /dev/nbd12 00:13:05.638 /dev/nbd13 00:13:05.638 /dev/nbd14' 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:05.638 /dev/nbd1 00:13:05.638 /dev/nbd10 00:13:05.638 /dev/nbd11 00:13:05.638 /dev/nbd12 00:13:05.638 /dev/nbd13 00:13:05.638 /dev/nbd14' 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:05.638 256+0 records in 00:13:05.638 256+0 records out 00:13:05.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692797 s, 151 MB/s 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:05.638 256+0 records in 00:13:05.638 256+0 records out 00:13:05.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.124726 s, 8.4 MB/s 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.638 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:05.897 256+0 records in 00:13:05.897 256+0 records out 00:13:05.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166864 s, 6.3 MB/s 00:13:05.897 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.897 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:05.897 256+0 records in 00:13:05.897 256+0 records out 00:13:05.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144241 s, 7.3 MB/s 00:13:05.897 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.897 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:06.155 256+0 records in 00:13:06.155 256+0 records out 00:13:06.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139058 s, 7.5 MB/s 00:13:06.155 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.155 03:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:06.155 256+0 records in 00:13:06.155 256+0 records out 00:13:06.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133825 s, 7.8 MB/s 00:13:06.155 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.155 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:06.413 256+0 records in 00:13:06.413 256+0 records out 00:13:06.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13837 s, 7.6 MB/s 00:13:06.413 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.413 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:06.671 256+0 records in 00:13:06.671 256+0 records out 00:13:06.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173478 s, 6.0 MB/s 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.671 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.929 03:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:07.198 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.198 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.199 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.481 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.739 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.304 03:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.561 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.819 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:09.076 03:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:09.334 malloc_lvol_verify 00:13:09.591 03:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:09.848 ab6796b4-3276-4bac-9c75-19d3d913e2c7 00:13:09.848 03:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:10.105 31761684-390d-4009-8a31-83576aa64400 00:13:10.105 03:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:10.362 /dev/nbd0 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:10.362 mke2fs 1.46.5 (30-Dec-2021) 00:13:10.362 Discarding device blocks: 0/4096 done 00:13:10.362 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:10.362 00:13:10.362 Allocating group tables: 0/1 done 00:13:10.362 Writing inode tables: 0/1 done 00:13:10.362 Creating journal (1024 blocks): done 00:13:10.362 Writing superblocks and filesystem accounting information: 0/1 done 00:13:10.362 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:10.362 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 68576 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68576 ']' 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68576 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68576 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:10.620 killing process with pid 68576 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68576' 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68576 00:13:10.620 03:41:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68576 00:13:11.994 03:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:11.994 00:13:11.994 real 0m15.149s 00:13:11.994 user 0m21.885s 00:13:11.994 sys 0m4.811s 00:13:11.994 03:41:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.994 ************************************ 00:13:11.994 END TEST bdev_nbd 00:13:11.994 ************************************ 00:13:11.994 03:41:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:11.994 03:41:26 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:13:11.994 03:41:26 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:11.994 03:41:26 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:13:11.994 03:41:26 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:13:11.994 skipping fio tests on NVMe due to multi-ns failures. 00:13:11.994 03:41:26 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:11.994 03:41:26 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:11.994 03:41:26 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:11.994 03:41:26 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:11.994 03:41:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.994 03:41:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:11.994 ************************************ 00:13:11.994 START TEST bdev_verify 00:13:11.994 ************************************ 00:13:11.994 03:41:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:11.995 [2024-07-26 03:41:26.840802] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:13:11.995 [2024-07-26 03:41:26.841047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69035 ] 00:13:12.252 [2024-07-26 03:41:27.021184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:12.510 [2024-07-26 03:41:27.212508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.510 [2024-07-26 03:41:27.212513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.074 Running I/O for 5 seconds... 
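The verify pass just launched is a plain bdevperf run against the bdev.json written earlier in the test; reproduced standalone it looks like the following (paths and flags copied from the trace; -C is passed through exactly as the harness does, without asserting its meaning here):

# bdev_verify invocation as traced above.
#   -q 128    queue depth
#   -o 4096   I/O size in bytes (4 KiB)
#   -w verify data-verification workload
#   -t 5      run time in seconds
#   -m 0x3    core mask, cores 0 and 1 (matching the two "Reactor started" lines)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3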
00:13:18.369 00:13:18.369 Latency(us) 00:13:18.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.369 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x0 length 0xbd0bd 00:13:18.369 Nvme0n1 : 5.09 1207.91 4.72 0.00 0.00 105277.21 20494.89 161099.40 00:13:18.369 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:18.369 Nvme0n1 : 5.06 1239.49 4.84 0.00 0.00 102980.27 18230.92 163959.16 00:13:18.369 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x0 length 0x4ff80 00:13:18.369 Nvme1n1p1 : 5.09 1206.43 4.71 0.00 0.00 105196.99 23116.33 158239.65 00:13:18.369 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:18.369 Nvme1n1p1 : 5.06 1239.07 4.84 0.00 0.00 102806.71 19899.11 158239.65 00:13:18.369 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x0 length 0x4ff7f 00:13:18.369 Nvme1n1p2 : 5.10 1205.56 4.71 0.00 0.00 105055.60 23592.96 155379.90 00:13:18.369 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:18.369 Nvme1n1p2 : 5.06 1238.65 4.84 0.00 0.00 102628.08 18230.92 149660.39 00:13:18.369 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x0 length 0x80000 00:13:18.369 Nvme2n1 : 5.11 1215.61 4.75 0.00 0.00 104451.89 4379.00 153473.40 00:13:18.369 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x80000 length 0x80000 00:13:18.369 Nvme2n1 : 5.07 1238.26 4.84 0.00 0.00 102454.50 17158.52 143940.89 00:13:18.369 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x0 length 0x80000 00:13:18.369 Nvme2n2 : 5.11 1215.20 4.75 0.00 0.00 104259.87 4706.68 152520.15 00:13:18.369 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x80000 length 0x80000 00:13:18.369 Nvme2n2 : 5.07 1237.88 4.84 0.00 0.00 102258.22 16205.27 152520.15 00:13:18.369 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x0 length 0x80000 00:13:18.369 Nvme2n3 : 5.11 1214.82 4.75 0.00 0.00 104068.47 5242.88 156333.15 00:13:18.369 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x80000 length 0x80000 00:13:18.369 Nvme2n3 : 5.08 1247.65 4.87 0.00 0.00 101306.79 3798.11 159192.90 00:13:18.369 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x0 length 0x20000 00:13:18.369 Nvme3n1 : 5.11 1214.46 4.74 0.00 0.00 103882.78 5510.98 162052.65 00:13:18.369 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:18.369 Verification LBA range: start 0x20000 length 0x20000 00:13:18.369 Nvme3n1 : 5.09 1257.90 4.91 0.00 0.00 100364.27 6672.76 164912.41 00:13:18.369 =================================================================================================================== 00:13:18.369 Total : 17178.88 67.10 0.00 0.00 103340.29 
3798.11 164912.41 00:13:19.749 00:13:19.749 real 0m7.754s 00:13:19.749 user 0m14.076s 00:13:19.749 sys 0m0.276s 00:13:19.749 ************************************ 00:13:19.749 END TEST bdev_verify 00:13:19.749 ************************************ 00:13:19.749 03:41:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.749 03:41:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:19.749 03:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:13:19.749 03:41:34 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:19.749 03:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:19.749 03:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.749 03:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.749 ************************************ 00:13:19.749 START TEST bdev_verify_big_io 00:13:19.749 ************************************ 00:13:19.749 03:41:34 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:19.749 [2024-07-26 03:41:34.613296] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:13:19.749 [2024-07-26 03:41:34.613470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69133 ] 00:13:20.007 [2024-07-26 03:41:34.786717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:20.265 [2024-07-26 03:41:34.988850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.265 [2024-07-26 03:41:34.988850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.199 Running I/O for 5 seconds... 
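In these bdevperf tables the MiB/s column is simply IOPS multiplied by the I/O size from the command line (4096 bytes for the bdev_verify run above, 65536 bytes for the big-I/O run whose results follow). A quick check against the two Nvme0n1 rows:

# 4 KiB verify run: 1207.91 IOPS
echo 'scale=4; 1207.91 * 4096 / 1048576' | bc    # 4.7185 -> reported as 4.72 MiB/s
# 64 KiB big-I/O run: 84.31 IOPS
echo 'scale=4; 84.31 * 65536 / 1048576' | bc     # 5.2694 -> reported as 5.27 MiB/s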
00:13:27.757 00:13:27.757 Latency(us) 00:13:27.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.757 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.757 Verification LBA range: start 0x0 length 0xbd0b 00:13:27.757 Nvme0n1 : 5.88 84.31 5.27 0.00 0.00 1444783.64 34078.72 2104778.01 00:13:27.757 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.757 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:27.757 Nvme0n1 : 5.77 110.92 6.93 0.00 0.00 1109892.75 17515.99 1166779.11 00:13:27.757 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.757 Verification LBA range: start 0x0 length 0x4ff8 00:13:27.757 Nvme1n1p1 : 5.76 110.69 6.92 0.00 0.00 1086521.89 91512.09 1021884.97 00:13:27.758 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x4ff8 length 0x4ff8 00:13:27.758 Nvme1n1p1 : 5.89 108.58 6.79 0.00 0.00 1076552.42 101044.60 983754.94 00:13:27.758 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x0 length 0x4ff7 00:13:27.758 Nvme1n1p2 : 5.89 113.83 7.11 0.00 0.00 1033337.50 117726.49 945624.90 00:13:27.758 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x4ff7 length 0x4ff7 00:13:27.758 Nvme1n1p2 : 5.90 112.26 7.02 0.00 0.00 1024785.77 122016.12 972315.93 00:13:27.758 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x0 length 0x8000 00:13:27.758 Nvme2n1 : 5.89 113.61 7.10 0.00 0.00 1003186.51 116296.61 960876.92 00:13:27.758 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x8000 length 0x8000 00:13:27.758 Nvme2n1 : 5.98 109.52 6.85 0.00 0.00 1030348.17 80073.08 1982761.89 00:13:27.758 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x0 length 0x8000 00:13:27.758 Nvme2n2 : 5.95 118.29 7.39 0.00 0.00 940329.04 60054.81 937998.89 00:13:27.758 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x8000 length 0x8000 00:13:27.758 Nvme2n2 : 6.03 114.05 7.13 0.00 0.00 966137.12 47424.23 2028517.93 00:13:27.758 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x0 length 0x8000 00:13:27.758 Nvme2n3 : 6.04 127.23 7.95 0.00 0.00 852577.44 48615.80 968502.92 00:13:27.758 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x8000 length 0x8000 00:13:27.758 Nvme2n3 : 6.07 118.80 7.43 0.00 0.00 899908.98 14298.76 2059021.96 00:13:27.758 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x0 length 0x2000 00:13:27.758 Nvme3n1 : 6.05 137.50 8.59 0.00 0.00 767634.13 6494.02 999006.95 00:13:27.758 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.758 Verification LBA range: start 0x2000 length 0x2000 00:13:27.758 Nvme3n1 : 6.08 133.35 8.33 0.00 0.00 782798.00 1884.16 1860745.77 00:13:27.758 =================================================================================================================== 00:13:27.758 Total : 1612.96 100.81 0.00 0.00 982876.93 
1884.16 2104778.01 00:13:29.132 00:13:29.132 real 0m9.256s 00:13:29.132 user 0m17.066s 00:13:29.132 sys 0m0.299s 00:13:29.132 03:41:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.132 03:41:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.132 ************************************ 00:13:29.132 END TEST bdev_verify_big_io 00:13:29.132 ************************************ 00:13:29.132 03:41:43 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:13:29.132 03:41:43 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.132 03:41:43 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:29.132 03:41:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.132 03:41:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:29.132 ************************************ 00:13:29.132 START TEST bdev_write_zeroes 00:13:29.132 ************************************ 00:13:29.132 03:41:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.132 [2024-07-26 03:41:43.924642] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:13:29.132 [2024-07-26 03:41:43.924833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69248 ] 00:13:29.391 [2024-07-26 03:41:44.091207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.649 [2024-07-26 03:41:44.308793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.287 Running I/O for 1 seconds... 
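Each sub-test in this log is driven through the harness's run_test helper, which is what produces the START TEST / END TEST banners and the real/user/sys timing lines. The snippet below is only an illustrative stand-in for that pattern, not SPDK's actual implementation in autotest_common.sh:

# Illustrative stand-in: banner plus timing wrapper, as seen throughout this log.
run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                 # emits the real/user/sys lines
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}

# e.g.: run_test_sketch bdev_write_zeroes ./build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w write_zeroes -t 1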
00:13:31.222 00:13:31.222 Latency(us) 00:13:31.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.222 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.222 Nvme0n1 : 1.03 4779.64 18.67 0.00 0.00 26579.59 15609.48 42419.67 00:13:31.222 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.222 Nvme1n1p1 : 1.04 4760.23 18.59 0.00 0.00 26595.63 15371.17 44087.85 00:13:31.222 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.222 Nvme1n1p2 : 1.04 4742.44 18.53 0.00 0.00 26565.96 15371.17 44326.17 00:13:31.222 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.222 Nvme2n1 : 1.04 4724.72 18.46 0.00 0.00 26578.65 15609.48 44326.17 00:13:31.222 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.222 Nvme2n2 : 1.05 4756.22 18.58 0.00 0.00 26520.13 15847.80 42896.29 00:13:31.222 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.222 Nvme2n3 : 1.05 4737.20 18.50 0.00 0.00 26514.00 15966.95 42419.67 00:13:31.222 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.222 Nvme3n1 : 1.06 4718.10 18.43 0.00 0.00 26503.28 15847.80 42657.98 00:13:31.222 =================================================================================================================== 00:13:31.222 Total : 33218.54 129.76 0.00 0.00 26550.82 15371.17 44326.17 00:13:32.596 00:13:32.596 real 0m3.444s 00:13:32.596 user 0m3.070s 00:13:32.596 sys 0m0.241s 00:13:32.596 03:41:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.596 03:41:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:32.596 ************************************ 00:13:32.596 END TEST bdev_write_zeroes 00:13:32.596 ************************************ 00:13:32.596 03:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:13:32.596 03:41:47 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.596 03:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:32.596 03:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.596 03:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:32.596 ************************************ 00:13:32.596 START TEST bdev_json_nonenclosed 00:13:32.596 ************************************ 00:13:32.596 03:41:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.596 [2024-07-26 03:41:47.452329] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:13:32.596 [2024-07-26 03:41:47.452516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69311 ] 00:13:32.853 [2024-07-26 03:41:47.621928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.112 [2024-07-26 03:41:47.810812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.112 [2024-07-26 03:41:47.810945] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:33.112 [2024-07-26 03:41:47.810977] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:33.112 [2024-07-26 03:41:47.810995] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:33.371 00:13:33.371 real 0m0.922s 00:13:33.371 user 0m0.661s 00:13:33.371 sys 0m0.152s 00:13:33.371 03:41:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:13:33.371 03:41:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.371 03:41:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:33.371 ************************************ 00:13:33.371 END TEST bdev_json_nonenclosed 00:13:33.371 ************************************ 00:13:33.371 03:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:13:33.371 03:41:48 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # true 00:13:33.371 03:41:48 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.371 03:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:33.371 03:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.371 03:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:33.664 ************************************ 00:13:33.664 START TEST bdev_json_nonarray 00:13:33.664 ************************************ 00:13:33.664 03:41:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.664 [2024-07-26 03:41:48.377509] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:13:33.664 [2024-07-26 03:41:48.377729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69338 ] 00:13:33.664 [2024-07-26 03:41:48.544264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.923 [2024-07-26 03:41:48.734229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.923 [2024-07-26 03:41:48.734353] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:33.923 [2024-07-26 03:41:48.734602] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:33.923 [2024-07-26 03:41:48.734626] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:34.488 00:13:34.488 real 0m0.878s 00:13:34.488 user 0m0.633s 00:13:34.488 sys 0m0.137s 00:13:34.488 03:41:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:13:34.488 03:41:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.488 ************************************ 00:13:34.488 END TEST bdev_json_nonarray 00:13:34.488 ************************************ 00:13:34.488 03:41:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:34.488 03:41:49 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:13:34.488 03:41:49 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # true 00:13:34.488 03:41:49 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:13:34.488 03:41:49 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:13:34.488 03:41:49 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:13:34.488 03:41:49 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:34.488 03:41:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.488 03:41:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:34.488 ************************************ 00:13:34.488 START TEST bdev_gpt_uuid 00:13:34.488 ************************************ 00:13:34.488 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:13:34.488 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69369 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 69369 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 69369 ']' 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:34.489 03:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:34.489 [2024-07-26 03:41:49.309699] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
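The two negative tests above hand bdevperf a config that is not enclosed in {} and one whose "subsystems" value is not an array, and expect json_config_prepare_ctx to reject both with the errors shown. For contrast, a minimal well-formed SPDK JSON config has the shape below; the malloc bdev entry and the /tmp path are illustrative only, since the bdev.json actually used by these tests is generated elsewhere in the run:

# Write a minimal, well-formed config: a top-level object with a "subsystems" array,
# each subsystem carrying a "config" array of {method, params} entries.
cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF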
00:13:34.489 [2024-07-26 03:41:49.309919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69369 ] 00:13:34.747 [2024-07-26 03:41:49.471807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.005 [2024-07-26 03:41:49.662131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.571 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.571 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:13:35.571 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:35.571 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.571 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:35.829 Some configs were skipped because the RPC state that can call them passed over. 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.829 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:13:35.829 { 00:13:35.829 "name": "Nvme1n1p1", 00:13:35.829 "aliases": [ 00:13:35.829 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:13:35.829 ], 00:13:35.829 "product_name": "GPT Disk", 00:13:35.829 "block_size": 4096, 00:13:35.829 "num_blocks": 655104, 00:13:35.829 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:35.829 "assigned_rate_limits": { 00:13:35.829 "rw_ios_per_sec": 0, 00:13:35.829 "rw_mbytes_per_sec": 0, 00:13:35.829 "r_mbytes_per_sec": 0, 00:13:35.829 "w_mbytes_per_sec": 0 00:13:35.829 }, 00:13:35.829 "claimed": false, 00:13:35.829 "zoned": false, 00:13:35.829 "supported_io_types": { 00:13:35.829 "read": true, 00:13:35.829 "write": true, 00:13:35.829 "unmap": true, 00:13:35.829 "flush": true, 00:13:35.829 "reset": true, 00:13:35.829 "nvme_admin": false, 00:13:35.829 "nvme_io": false, 00:13:35.829 "nvme_io_md": false, 00:13:35.829 "write_zeroes": true, 00:13:35.829 "zcopy": false, 00:13:35.829 "get_zone_info": false, 00:13:35.829 "zone_management": false, 00:13:35.829 "zone_append": false, 00:13:35.829 "compare": true, 00:13:35.829 "compare_and_write": false, 00:13:35.829 "abort": true, 00:13:35.829 "seek_hole": false, 00:13:35.829 "seek_data": false, 00:13:35.829 "copy": true, 00:13:35.829 "nvme_iov_md": false 00:13:35.829 }, 00:13:35.829 "driver_specific": { 
00:13:35.829 "gpt": { 00:13:35.829 "base_bdev": "Nvme1n1", 00:13:35.829 "offset_blocks": 256, 00:13:35.829 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:13:35.829 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:35.829 "partition_name": "SPDK_TEST_first" 00:13:35.829 } 00:13:35.829 } 00:13:35.829 } 00:13:35.829 ]' 00:13:35.830 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.088 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:13:36.088 { 00:13:36.088 "name": "Nvme1n1p2", 00:13:36.088 "aliases": [ 00:13:36.088 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:13:36.088 ], 00:13:36.088 "product_name": "GPT Disk", 00:13:36.088 "block_size": 4096, 00:13:36.088 "num_blocks": 655103, 00:13:36.088 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:36.088 "assigned_rate_limits": { 00:13:36.088 "rw_ios_per_sec": 0, 00:13:36.088 "rw_mbytes_per_sec": 0, 00:13:36.088 "r_mbytes_per_sec": 0, 00:13:36.088 "w_mbytes_per_sec": 0 00:13:36.088 }, 00:13:36.088 "claimed": false, 00:13:36.088 "zoned": false, 00:13:36.088 "supported_io_types": { 00:13:36.088 "read": true, 00:13:36.088 "write": true, 00:13:36.088 "unmap": true, 00:13:36.088 "flush": true, 00:13:36.088 "reset": true, 00:13:36.088 "nvme_admin": false, 00:13:36.088 "nvme_io": false, 00:13:36.088 "nvme_io_md": false, 00:13:36.088 "write_zeroes": true, 00:13:36.088 "zcopy": false, 00:13:36.088 "get_zone_info": false, 00:13:36.088 "zone_management": false, 00:13:36.088 "zone_append": false, 00:13:36.088 "compare": true, 00:13:36.088 "compare_and_write": false, 00:13:36.088 "abort": true, 00:13:36.088 "seek_hole": false, 00:13:36.088 "seek_data": false, 00:13:36.088 "copy": true, 00:13:36.088 "nvme_iov_md": false 00:13:36.088 }, 00:13:36.089 "driver_specific": { 00:13:36.089 "gpt": { 00:13:36.089 "base_bdev": "Nvme1n1", 00:13:36.089 "offset_blocks": 655360, 00:13:36.089 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:13:36.089 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:36.089 "partition_name": "SPDK_TEST_second" 00:13:36.089 } 00:13:36.089 } 00:13:36.089 } 00:13:36.089 ]' 00:13:36.089 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:13:36.089 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:13:36.089 03:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 69369 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 69369 ']' 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 69369 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69369 00:13:36.347 killing process with pid 69369 00:13:36.347 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:36.348 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:36.348 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69369' 00:13:36.348 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 69369 00:13:36.348 03:41:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 69369 00:13:38.878 00:13:38.878 real 0m3.972s 00:13:38.878 user 0m4.391s 00:13:38.878 sys 0m0.421s 00:13:38.878 03:41:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.878 03:41:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:38.878 ************************************ 00:13:38.878 END TEST bdev_gpt_uuid 00:13:38.878 ************************************ 00:13:38.878 03:41:53 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:13:38.878 03:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:38.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:38.878 Waiting for block devices as requested 00:13:38.878 
0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.136 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.136 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.136 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:44.444 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:44.444 03:41:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:13:44.444 03:41:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:13:44.444 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:44.444 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:44.444 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:44.444 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:44.444 03:41:59 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:13:44.444 00:13:44.444 real 1m6.654s 00:13:44.444 user 1m25.872s 00:13:44.444 sys 0m9.891s 00:13:44.444 03:41:59 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.444 ************************************ 00:13:44.444 END TEST blockdev_nvme_gpt 00:13:44.444 ************************************ 00:13:44.444 03:41:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:44.703 03:41:59 -- common/autotest_common.sh@1142 -- # return 0 00:13:44.703 03:41:59 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:44.703 03:41:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:44.703 03:41:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.703 03:41:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.703 ************************************ 00:13:44.703 START TEST nvme 00:13:44.703 ************************************ 00:13:44.703 03:41:59 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:44.703 * Looking for test storage... 00:13:44.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:44.703 03:41:59 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:45.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:45.834 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.834 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.834 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.834 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.834 03:42:00 nvme -- nvme/nvme.sh@79 -- # uname 00:13:45.834 Waiting for stub to ready for secondary processes... 00:13:45.834 03:42:00 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:13:45.834 03:42:00 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:13:45.834 03:42:00 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1069 -- # stubpid=70003 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 
00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/70003 ]] 00:13:45.834 03:42:00 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:13:45.834 [2024-07-26 03:42:00.728361] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:13:45.834 [2024-07-26 03:42:00.728580] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:13:46.771 [2024-07-26 03:42:01.539077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.771 03:42:01 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:46.771 03:42:01 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/70003 ]] 00:13:46.771 03:42:01 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:13:47.029 [2024-07-26 03:42:01.768929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.029 [2024-07-26 03:42:01.769003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.029 [2024-07-26 03:42:01.769007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.029 [2024-07-26 03:42:01.792072] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:13:47.029 [2024-07-26 03:42:01.792167] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:47.029 [2024-07-26 03:42:01.801806] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:47.029 [2024-07-26 03:42:01.802087] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:47.029 [2024-07-26 03:42:01.805031] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:47.029 [2024-07-26 03:42:01.805539] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:13:47.029 [2024-07-26 03:42:01.805661] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:13:47.029 [2024-07-26 03:42:01.808709] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:47.029 [2024-07-26 03:42:01.809032] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:13:47.029 [2024-07-26 03:42:01.809170] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:13:47.029 [2024-07-26 03:42:01.812296] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:47.029 [2024-07-26 03:42:01.812615] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:13:47.029 [2024-07-26 03:42:01.812759] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:13:47.029 [2024-07-26 03:42:01.812897] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:13:47.029 [2024-07-26 03:42:01.813008] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:13:47.962 done. 
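Before the NVMe tests proper, the harness launches test/app/stub with -s 4096 -i 0 -m 0xE and, as traced just above, waits for it to come up by polling once per second for /var/run/spdk_stub0 while checking that the stub process is still alive. A condensed sketch of that wait (the function name is assumed; the readiness file, the 1 s sleep and the /proc check are all visible in the trace):

# Condensed sketch of the stub readiness wait traced above.
# $1 is the PID of: /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
wait_for_stub() {
    local stubpid=$1
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || return 1   # stub exited before becoming ready
        sleep 1s
    done
    echo done.
}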
00:13:47.962 03:42:02 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:47.962 03:42:02 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:13:47.962 03:42:02 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:47.962 03:42:02 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:13:47.962 03:42:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.962 03:42:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.962 ************************************ 00:13:47.962 START TEST nvme_reset 00:13:47.962 ************************************ 00:13:47.962 03:42:02 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:48.220 Initializing NVMe Controllers 00:13:48.220 Skipping QEMU NVMe SSD at 0000:00:10.0 00:13:48.220 Skipping QEMU NVMe SSD at 0000:00:11.0 00:13:48.220 Skipping QEMU NVMe SSD at 0000:00:13.0 00:13:48.220 Skipping QEMU NVMe SSD at 0000:00:12.0 00:13:48.220 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:13:48.220 00:13:48.220 ************************************ 00:13:48.220 END TEST nvme_reset 00:13:48.220 ************************************ 00:13:48.220 real 0m0.337s 00:13:48.220 user 0m0.102s 00:13:48.220 sys 0m0.181s 00:13:48.220 03:42:03 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.220 03:42:03 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:13:48.220 03:42:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:48.220 03:42:03 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:13:48.220 03:42:03 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:48.220 03:42:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.220 03:42:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.220 ************************************ 00:13:48.220 START TEST nvme_identify 00:13:48.220 ************************************ 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:13:48.220 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:13:48.220 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:13:48.220 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:13:48.220 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:48.220 03:42:03 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:48.220 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:13:48.787 [2024-07-26 03:42:03.442878] 
nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 70036 terminated unexpected 00:13:48.787 ===================================================== 00:13:48.787 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:48.787 ===================================================== 00:13:48.787 Controller Capabilities/Features 00:13:48.787 ================================ 00:13:48.787 Vendor ID: 1b36 00:13:48.787 Subsystem Vendor ID: 1af4 00:13:48.787 Serial Number: 12340 00:13:48.787 Model Number: QEMU NVMe Ctrl 00:13:48.787 Firmware Version: 8.0.0 00:13:48.787 Recommended Arb Burst: 6 00:13:48.787 IEEE OUI Identifier: 00 54 52 00:13:48.787 Multi-path I/O 00:13:48.787 May have multiple subsystem ports: No 00:13:48.787 May have multiple controllers: No 00:13:48.787 Associated with SR-IOV VF: No 00:13:48.787 Max Data Transfer Size: 524288 00:13:48.788 Max Number of Namespaces: 256 00:13:48.788 Max Number of I/O Queues: 64 00:13:48.788 NVMe Specification Version (VS): 1.4 00:13:48.788 NVMe Specification Version (Identify): 1.4 00:13:48.788 Maximum Queue Entries: 2048 00:13:48.788 Contiguous Queues Required: Yes 00:13:48.788 Arbitration Mechanisms Supported 00:13:48.788 Weighted Round Robin: Not Supported 00:13:48.788 Vendor Specific: Not Supported 00:13:48.788 Reset Timeout: 7500 ms 00:13:48.788 Doorbell Stride: 4 bytes 00:13:48.788 NVM Subsystem Reset: Not Supported 00:13:48.788 Command Sets Supported 00:13:48.788 NVM Command Set: Supported 00:13:48.788 Boot Partition: Not Supported 00:13:48.788 Memory Page Size Minimum: 4096 bytes 00:13:48.788 Memory Page Size Maximum: 65536 bytes 00:13:48.788 Persistent Memory Region: Not Supported 00:13:48.788 Optional Asynchronous Events Supported 00:13:48.788 Namespace Attribute Notices: Supported 00:13:48.788 Firmware Activation Notices: Not Supported 00:13:48.788 ANA Change Notices: Not Supported 00:13:48.788 PLE Aggregate Log Change Notices: Not Supported 00:13:48.788 LBA Status Info Alert Notices: Not Supported 00:13:48.788 EGE Aggregate Log Change Notices: Not Supported 00:13:48.788 Normal NVM Subsystem Shutdown event: Not Supported 00:13:48.788 Zone Descriptor Change Notices: Not Supported 00:13:48.788 Discovery Log Change Notices: Not Supported 00:13:48.788 Controller Attributes 00:13:48.788 128-bit Host Identifier: Not Supported 00:13:48.788 Non-Operational Permissive Mode: Not Supported 00:13:48.788 NVM Sets: Not Supported 00:13:48.788 Read Recovery Levels: Not Supported 00:13:48.788 Endurance Groups: Not Supported 00:13:48.788 Predictable Latency Mode: Not Supported 00:13:48.788 Traffic Based Keep ALive: Not Supported 00:13:48.788 Namespace Granularity: Not Supported 00:13:48.788 SQ Associations: Not Supported 00:13:48.788 UUID List: Not Supported 00:13:48.788 Multi-Domain Subsystem: Not Supported 00:13:48.788 Fixed Capacity Management: Not Supported 00:13:48.788 Variable Capacity Management: Not Supported 00:13:48.788 Delete Endurance Group: Not Supported 00:13:48.788 Delete NVM Set: Not Supported 00:13:48.788 Extended LBA Formats Supported: Supported 00:13:48.788 Flexible Data Placement Supported: Not Supported 00:13:48.788 00:13:48.788 Controller Memory Buffer Support 00:13:48.788 ================================ 00:13:48.788 Supported: No 00:13:48.788 00:13:48.788 Persistent Memory Region Support 00:13:48.788 ================================ 00:13:48.788 Supported: No 00:13:48.788 00:13:48.788 Admin Command Set Attributes 00:13:48.788 ============================ 00:13:48.788 Security Send/Receive: Not Supported 
00:13:48.788 Format NVM: Supported 00:13:48.788 Firmware Activate/Download: Not Supported 00:13:48.788 Namespace Management: Supported 00:13:48.788 Device Self-Test: Not Supported 00:13:48.788 Directives: Supported 00:13:48.788 NVMe-MI: Not Supported 00:13:48.788 Virtualization Management: Not Supported 00:13:48.788 Doorbell Buffer Config: Supported 00:13:48.788 Get LBA Status Capability: Not Supported 00:13:48.788 Command & Feature Lockdown Capability: Not Supported 00:13:48.788 Abort Command Limit: 4 00:13:48.788 Async Event Request Limit: 4 00:13:48.788 Number of Firmware Slots: N/A 00:13:48.788 Firmware Slot 1 Read-Only: N/A 00:13:48.788 Firmware Activation Without Reset: N/A 00:13:48.788 Multiple Update Detection Support: N/A 00:13:48.788 Firmware Update Granularity: No Information Provided 00:13:48.788 Per-Namespace SMART Log: Yes 00:13:48.788 Asymmetric Namespace Access Log Page: Not Supported 00:13:48.788 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:48.788 Command Effects Log Page: Supported 00:13:48.788 Get Log Page Extended Data: Supported 00:13:48.788 Telemetry Log Pages: Not Supported 00:13:48.788 Persistent Event Log Pages: Not Supported 00:13:48.788 Supported Log Pages Log Page: May Support 00:13:48.788 Commands Supported & Effects Log Page: Not Supported 00:13:48.788 Feature Identifiers & Effects Log Page:May Support 00:13:48.788 NVMe-MI Commands & Effects Log Page: May Support 00:13:48.788 Data Area 4 for Telemetry Log: Not Supported 00:13:48.788 Error Log Page Entries Supported: 1 00:13:48.788 Keep Alive: Not Supported 00:13:48.788 00:13:48.788 NVM Command Set Attributes 00:13:48.788 ========================== 00:13:48.788 Submission Queue Entry Size 00:13:48.788 Max: 64 00:13:48.788 Min: 64 00:13:48.788 Completion Queue Entry Size 00:13:48.788 Max: 16 00:13:48.788 Min: 16 00:13:48.788 Number of Namespaces: 256 00:13:48.788 Compare Command: Supported 00:13:48.788 Write Uncorrectable Command: Not Supported 00:13:48.788 Dataset Management Command: Supported 00:13:48.788 Write Zeroes Command: Supported 00:13:48.788 Set Features Save Field: Supported 00:13:48.788 Reservations: Not Supported 00:13:48.788 Timestamp: Supported 00:13:48.788 Copy: Supported 00:13:48.788 Volatile Write Cache: Present 00:13:48.788 Atomic Write Unit (Normal): 1 00:13:48.788 Atomic Write Unit (PFail): 1 00:13:48.788 Atomic Compare & Write Unit: 1 00:13:48.788 Fused Compare & Write: Not Supported 00:13:48.788 Scatter-Gather List 00:13:48.788 SGL Command Set: Supported 00:13:48.788 SGL Keyed: Not Supported 00:13:48.788 SGL Bit Bucket Descriptor: Not Supported 00:13:48.788 SGL Metadata Pointer: Not Supported 00:13:48.788 Oversized SGL: Not Supported 00:13:48.788 SGL Metadata Address: Not Supported 00:13:48.788 SGL Offset: Not Supported 00:13:48.788 Transport SGL Data Block: Not Supported 00:13:48.788 Replay Protected Memory Block: Not Supported 00:13:48.788 00:13:48.788 Firmware Slot Information 00:13:48.788 ========================= 00:13:48.788 Active slot: 1 00:13:48.788 Slot 1 Firmware Revision: 1.0 00:13:48.788 00:13:48.788 00:13:48.788 Commands Supported and Effects 00:13:48.788 ============================== 00:13:48.788 Admin Commands 00:13:48.788 -------------- 00:13:48.788 Delete I/O Submission Queue (00h): Supported 00:13:48.788 Create I/O Submission Queue (01h): Supported 00:13:48.788 Get Log Page (02h): Supported 00:13:48.788 Delete I/O Completion Queue (04h): Supported 00:13:48.788 Create I/O Completion Queue (05h): Supported 00:13:48.788 Identify (06h): Supported 00:13:48.788 Abort 
(08h): Supported 00:13:48.788 Set Features (09h): Supported 00:13:48.788 Get Features (0Ah): Supported 00:13:48.788 Asynchronous Event Request (0Ch): Supported 00:13:48.788 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:48.788 Directive Send (19h): Supported 00:13:48.788 Directive Receive (1Ah): Supported 00:13:48.788 Virtualization Management (1Ch): Supported 00:13:48.788 Doorbell Buffer Config (7Ch): Supported 00:13:48.788 Format NVM (80h): Supported LBA-Change 00:13:48.788 I/O Commands 00:13:48.788 ------------ 00:13:48.788 Flush (00h): Supported LBA-Change 00:13:48.788 Write (01h): Supported LBA-Change 00:13:48.788 Read (02h): Supported 00:13:48.788 Compare (05h): Supported 00:13:48.788 Write Zeroes (08h): Supported LBA-Change 00:13:48.788 Dataset Management (09h): Supported LBA-Change 00:13:48.788 Unknown (0Ch): Supported 00:13:48.788 Unknown (12h): Supported 00:13:48.788 Copy (19h): Supported LBA-Change 00:13:48.788 Unknown (1Dh): Supported LBA-Change 00:13:48.788 00:13:48.788 Error Log 00:13:48.788 ========= 00:13:48.788 00:13:48.788 Arbitration 00:13:48.788 =========== 00:13:48.788 Arbitration Burst: no limit 00:13:48.788 00:13:48.788 Power Management 00:13:48.788 ================ 00:13:48.788 Number of Power States: 1 00:13:48.788 Current Power State: Power State #0 00:13:48.788 Power State #0: 00:13:48.788 Max Power: 25.00 W 00:13:48.788 Non-Operational State: Operational 00:13:48.788 Entry Latency: 16 microseconds 00:13:48.788 Exit Latency: 4 microseconds 00:13:48.788 Relative Read Throughput: 0 00:13:48.788 Relative Read Latency: 0 00:13:48.788 Relative Write Throughput: 0 00:13:48.788 Relative Write Latency: 0 00:13:48.788 Idle Power[2024-07-26 03:42:03.444308] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 70036 terminated unexpected 00:13:48.788 : Not Reported 00:13:48.788 Active Power: Not Reported 00:13:48.788 Non-Operational Permissive Mode: Not Supported 00:13:48.788 00:13:48.788 Health Information 00:13:48.788 ================== 00:13:48.788 Critical Warnings: 00:13:48.788 Available Spare Space: OK 00:13:48.788 Temperature: OK 00:13:48.788 Device Reliability: OK 00:13:48.788 Read Only: No 00:13:48.788 Volatile Memory Backup: OK 00:13:48.789 Current Temperature: 323 Kelvin (50 Celsius) 00:13:48.789 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:48.789 Available Spare: 0% 00:13:48.789 Available Spare Threshold: 0% 00:13:48.789 Life Percentage Used: 0% 00:13:48.789 Data Units Read: 664 00:13:48.789 Data Units Written: 556 00:13:48.789 Host Read Commands: 31952 00:13:48.789 Host Write Commands: 30990 00:13:48.789 Controller Busy Time: 0 minutes 00:13:48.789 Power Cycles: 0 00:13:48.789 Power On Hours: 0 hours 00:13:48.789 Unsafe Shutdowns: 0 00:13:48.789 Unrecoverable Media Errors: 0 00:13:48.789 Lifetime Error Log Entries: 0 00:13:48.789 Warning Temperature Time: 0 minutes 00:13:48.789 Critical Temperature Time: 0 minutes 00:13:48.789 00:13:48.789 Number of Queues 00:13:48.789 ================ 00:13:48.789 Number of I/O Submission Queues: 64 00:13:48.789 Number of I/O Completion Queues: 64 00:13:48.789 00:13:48.789 ZNS Specific Controller Data 00:13:48.789 ============================ 00:13:48.789 Zone Append Size Limit: 0 00:13:48.789 00:13:48.789 00:13:48.789 Active Namespaces 00:13:48.789 ================= 00:13:48.789 Namespace ID:1 00:13:48.789 Error Recovery Timeout: Unlimited 00:13:48.789 Command Set Identifier: NVM (00h) 00:13:48.789 Deallocate: Supported 00:13:48.789 Deallocated/Unwritten Error: Supported 
00:13:48.789 Deallocated Read Value: All 0x00 00:13:48.789 Deallocate in Write Zeroes: Not Supported 00:13:48.789 Deallocated Guard Field: 0xFFFF 00:13:48.789 Flush: Supported 00:13:48.789 Reservation: Not Supported 00:13:48.789 Metadata Transferred as: Separate Metadata Buffer 00:13:48.789 Namespace Sharing Capabilities: Private 00:13:48.789 Size (in LBAs): 1548666 (5GiB) 00:13:48.789 Capacity (in LBAs): 1548666 (5GiB) 00:13:48.789 Utilization (in LBAs): 1548666 (5GiB) 00:13:48.789 Thin Provisioning: Not Supported 00:13:48.789 Per-NS Atomic Units: No 00:13:48.789 Maximum Single Source Range Length: 128 00:13:48.789 Maximum Copy Length: 128 00:13:48.789 Maximum Source Range Count: 128 00:13:48.789 NGUID/EUI64 Never Reused: No 00:13:48.789 Namespace Write Protected: No 00:13:48.789 Number of LBA Formats: 8 00:13:48.789 Current LBA Format: LBA Format #07 00:13:48.789 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.789 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:48.789 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:48.789 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:48.789 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:48.789 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:48.789 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:48.789 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:48.789 00:13:48.789 NVM Specific Namespace Data 00:13:48.789 =========================== 00:13:48.789 Logical Block Storage Tag Mask: 0 00:13:48.789 Protection Information Capabilities: 00:13:48.789 16b Guard Protection Information Storage Tag Support: No 00:13:48.789 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:48.789 Storage Tag Check Read Support: No 00:13:48.789 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.789 ===================================================== 00:13:48.789 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:48.789 ===================================================== 00:13:48.789 Controller Capabilities/Features 00:13:48.789 ================================ 00:13:48.789 Vendor ID: 1b36 00:13:48.789 Subsystem Vendor ID: 1af4 00:13:48.789 Serial Number: 12341 00:13:48.789 Model Number: QEMU NVMe Ctrl 00:13:48.789 Firmware Version: 8.0.0 00:13:48.789 Recommended Arb Burst: 6 00:13:48.789 IEEE OUI Identifier: 00 54 52 00:13:48.789 Multi-path I/O 00:13:48.789 May have multiple subsystem ports: No 00:13:48.789 May have multiple controllers: No 00:13:48.789 Associated with SR-IOV VF: No 00:13:48.789 Max Data Transfer Size: 524288 00:13:48.789 Max Number of Namespaces: 256 00:13:48.789 Max Number of I/O Queues: 64 00:13:48.789 NVMe Specification Version (VS): 1.4 00:13:48.789 NVMe Specification Version (Identify): 1.4 00:13:48.789 
Maximum Queue Entries: 2048 00:13:48.789 Contiguous Queues Required: Yes 00:13:48.789 Arbitration Mechanisms Supported 00:13:48.789 Weighted Round Robin: Not Supported 00:13:48.789 Vendor Specific: Not Supported 00:13:48.789 Reset Timeout: 7500 ms 00:13:48.789 Doorbell Stride: 4 bytes 00:13:48.789 NVM Subsystem Reset: Not Supported 00:13:48.789 Command Sets Supported 00:13:48.789 NVM Command Set: Supported 00:13:48.789 Boot Partition: Not Supported 00:13:48.789 Memory Page Size Minimum: 4096 bytes 00:13:48.789 Memory Page Size Maximum: 65536 bytes 00:13:48.789 Persistent Memory Region: Not Supported 00:13:48.789 Optional Asynchronous Events Supported 00:13:48.789 Namespace Attribute Notices: Supported 00:13:48.789 Firmware Activation Notices: Not Supported 00:13:48.789 ANA Change Notices: Not Supported 00:13:48.789 PLE Aggregate Log Change Notices: Not Supported 00:13:48.789 LBA Status Info Alert Notices: Not Supported 00:13:48.789 EGE Aggregate Log Change Notices: Not Supported 00:13:48.789 Normal NVM Subsystem Shutdown event: Not Supported 00:13:48.789 Zone Descriptor Change Notices: Not Supported 00:13:48.789 Discovery Log Change Notices: Not Supported 00:13:48.789 Controller Attributes 00:13:48.789 128-bit Host Identifier: Not Supported 00:13:48.789 Non-Operational Permissive Mode: Not Supported 00:13:48.789 NVM Sets: Not Supported 00:13:48.789 Read Recovery Levels: Not Supported 00:13:48.789 Endurance Groups: Not Supported 00:13:48.789 Predictable Latency Mode: Not Supported 00:13:48.789 Traffic Based Keep ALive: Not Supported 00:13:48.789 Namespace Granularity: Not Supported 00:13:48.789 SQ Associations: Not Supported 00:13:48.789 UUID List: Not Supported 00:13:48.789 Multi-Domain Subsystem: Not Supported 00:13:48.789 Fixed Capacity Management: Not Supported 00:13:48.789 Variable Capacity Management: Not Supported 00:13:48.789 Delete Endurance Group: Not Supported 00:13:48.789 Delete NVM Set: Not Supported 00:13:48.789 Extended LBA Formats Supported: Supported 00:13:48.789 Flexible Data Placement Supported: Not Supported 00:13:48.789 00:13:48.789 Controller Memory Buffer Support 00:13:48.789 ================================ 00:13:48.789 Supported: No 00:13:48.789 00:13:48.789 Persistent Memory Region Support 00:13:48.789 ================================ 00:13:48.789 Supported: No 00:13:48.789 00:13:48.789 Admin Command Set Attributes 00:13:48.789 ============================ 00:13:48.789 Security Send/Receive: Not Supported 00:13:48.789 Format NVM: Supported 00:13:48.789 Firmware Activate/Download: Not Supported 00:13:48.789 Namespace Management: Supported 00:13:48.789 Device Self-Test: Not Supported 00:13:48.789 Directives: Supported 00:13:48.789 NVMe-MI: Not Supported 00:13:48.789 Virtualization Management: Not Supported 00:13:48.789 Doorbell Buffer Config: Supported 00:13:48.789 Get LBA Status Capability: Not Supported 00:13:48.789 Command & Feature Lockdown Capability: Not Supported 00:13:48.789 Abort Command Limit: 4 00:13:48.789 Async Event Request Limit: 4 00:13:48.789 Number of Firmware Slots: N/A 00:13:48.789 Firmware Slot 1 Read-Only: N/A 00:13:48.789 Firmware Activation Without Reset: N/A 00:13:48.789 Multiple Update Detection Support: N/A 00:13:48.789 Firmware Update Granularity: No Information Provided 00:13:48.789 Per-Namespace SMART Log: Yes 00:13:48.789 Asymmetric Namespace Access Log Page: Not Supported 00:13:48.789 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:48.789 Command Effects Log Page: Supported 00:13:48.789 Get Log Page Extended Data: Supported 
00:13:48.789 Telemetry Log Pages: Not Supported 00:13:48.789 Persistent Event Log Pages: Not Supported 00:13:48.789 Supported Log Pages Log Page: May Support 00:13:48.789 Commands Supported & Effects Log Page: Not Supported 00:13:48.790 Feature Identifiers & Effects Log Page:May Support 00:13:48.790 NVMe-MI Commands & Effects Log Page: May Support 00:13:48.790 Data Area 4 for Telemetry Log: Not Supported 00:13:48.790 Error Log Page Entries Supported: 1 00:13:48.790 Keep Alive: Not Supported 00:13:48.790 00:13:48.790 NVM Command Set Attributes 00:13:48.790 ========================== 00:13:48.790 Submission Queue Entry Size 00:13:48.790 Max: 64 00:13:48.790 Min: 64 00:13:48.790 Completion Queue Entry Size 00:13:48.790 Max: 16 00:13:48.790 Min: 16 00:13:48.790 Number of Namespaces: 256 00:13:48.790 Compare Command: Supported 00:13:48.790 Write Uncorrectable Command: Not Supported 00:13:48.790 Dataset Management Command: Supported 00:13:48.790 Write Zeroes Command: Supported 00:13:48.790 Set Features Save Field: Supported 00:13:48.790 Reservations: Not Supported 00:13:48.790 Timestamp: Supported 00:13:48.790 Copy: Supported 00:13:48.790 Volatile Write Cache: Present 00:13:48.790 Atomic Write Unit (Normal): 1 00:13:48.790 Atomic Write Unit (PFail): 1 00:13:48.790 Atomic Compare & Write Unit: 1 00:13:48.790 Fused Compare & Write: Not Supported 00:13:48.790 Scatter-Gather List 00:13:48.790 SGL Command Set: Supported 00:13:48.790 SGL Keyed: Not Supported 00:13:48.790 SGL Bit Bucket Descriptor: Not Supported 00:13:48.790 SGL Metadata Pointer: Not Supported 00:13:48.790 Oversized SGL: Not Supported 00:13:48.790 SGL Metadata Address: Not Supported 00:13:48.790 SGL Offset: Not Supported 00:13:48.790 Transport SGL Data Block: Not Supported 00:13:48.790 Replay Protected Memory Block: Not Supported 00:13:48.790 00:13:48.790 Firmware Slot Information 00:13:48.790 ========================= 00:13:48.790 Active slot: 1 00:13:48.790 Slot 1 Firmware Revision: 1.0 00:13:48.790 00:13:48.790 00:13:48.790 Commands Supported and Effects 00:13:48.790 ============================== 00:13:48.790 Admin Commands 00:13:48.790 -------------- 00:13:48.790 Delete I/O Submission Queue (00h): Supported 00:13:48.790 Create I/O Submission Queue (01h): Supported 00:13:48.790 Get Log Page (02h): Supported 00:13:48.790 Delete I/O Completion Queue (04h): Supported 00:13:48.790 Create I/O Completion Queue (05h): Supported 00:13:48.790 Identify (06h): Supported 00:13:48.790 Abort (08h): Supported 00:13:48.790 Set Features (09h): Supported 00:13:48.790 Get Features (0Ah): Supported 00:13:48.790 Asynchronous Event Request (0Ch): Supported 00:13:48.790 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:48.790 Directive Send (19h): Supported 00:13:48.790 Directive Receive (1Ah): Supported 00:13:48.790 Virtualization Management (1Ch): Supported 00:13:48.790 Doorbell Buffer Config (7Ch): Supported 00:13:48.790 Format NVM (80h): Supported LBA-Change 00:13:48.790 I/O Commands 00:13:48.790 ------------ 00:13:48.790 Flush (00h): Supported LBA-Change 00:13:48.790 Write (01h): Supported LBA-Change 00:13:48.790 Read (02h): Supported 00:13:48.790 Compare (05h): Supported 00:13:48.790 Write Zeroes (08h): Supported LBA-Change 00:13:48.790 Dataset Management (09h): Supported LBA-Change 00:13:48.790 Unknown (0Ch): Supported 00:13:48.790 Unknown (12h): Supported 00:13:48.790 Copy (19h): Supported LBA-Change 00:13:48.790 Unknown (1Dh): Supported LBA-Change 00:13:48.790 00:13:48.790 Error Log 00:13:48.790 ========= 00:13:48.790 00:13:48.790 
Arbitration 00:13:48.790 =========== 00:13:48.790 Arbitration Burst: no limit 00:13:48.790 00:13:48.790 Power Management 00:13:48.790 ================ 00:13:48.790 Number of Power States: 1 00:13:48.790 Current Power State: Power State #0 00:13:48.790 Power State #0: 00:13:48.790 Max Power: 25.00 W 00:13:48.790 Non-Operational State: Operational 00:13:48.790 Entry Latency: 16 microseconds 00:13:48.790 Exit Latency: 4 microseconds 00:13:48.790 Relative Read Throughput: 0 00:13:48.790 Relative Read Latency: 0 00:13:48.790 Relative Write Throughput: 0 00:13:48.790 Relative Write Latency: 0 00:13:48.790 Idle Power: Not Reported 00:13:48.790 Active Power: Not Reported 00:13:48.790 Non-Operational Permissive Mode: Not Supported 00:13:48.790 00:13:48.790 Health Information 00:13:48.790 ================== 00:13:48.790 Critical Warnings: 00:13:48.790 Available Spare Space: OK 00:13:48.790 Temperature: [2024-07-26 03:42:03.445553] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 70036 terminated unexpected 00:13:48.790 OK 00:13:48.790 Device Reliability: OK 00:13:48.790 Read Only: No 00:13:48.790 Volatile Memory Backup: OK 00:13:48.790 Current Temperature: 323 Kelvin (50 Celsius) 00:13:48.790 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:48.790 Available Spare: 0% 00:13:48.790 Available Spare Threshold: 0% 00:13:48.790 Life Percentage Used: 0% 00:13:48.790 Data Units Read: 1078 00:13:48.790 Data Units Written: 861 00:13:48.790 Host Read Commands: 48063 00:13:48.790 Host Write Commands: 45069 00:13:48.790 Controller Busy Time: 0 minutes 00:13:48.790 Power Cycles: 0 00:13:48.790 Power On Hours: 0 hours 00:13:48.790 Unsafe Shutdowns: 0 00:13:48.790 Unrecoverable Media Errors: 0 00:13:48.790 Lifetime Error Log Entries: 0 00:13:48.790 Warning Temperature Time: 0 minutes 00:13:48.790 Critical Temperature Time: 0 minutes 00:13:48.790 00:13:48.790 Number of Queues 00:13:48.790 ================ 00:13:48.790 Number of I/O Submission Queues: 64 00:13:48.790 Number of I/O Completion Queues: 64 00:13:48.790 00:13:48.790 ZNS Specific Controller Data 00:13:48.790 ============================ 00:13:48.790 Zone Append Size Limit: 0 00:13:48.790 00:13:48.790 00:13:48.790 Active Namespaces 00:13:48.790 ================= 00:13:48.790 Namespace ID:1 00:13:48.790 Error Recovery Timeout: Unlimited 00:13:48.790 Command Set Identifier: NVM (00h) 00:13:48.790 Deallocate: Supported 00:13:48.790 Deallocated/Unwritten Error: Supported 00:13:48.790 Deallocated Read Value: All 0x00 00:13:48.790 Deallocate in Write Zeroes: Not Supported 00:13:48.790 Deallocated Guard Field: 0xFFFF 00:13:48.790 Flush: Supported 00:13:48.790 Reservation: Not Supported 00:13:48.790 Namespace Sharing Capabilities: Private 00:13:48.790 Size (in LBAs): 1310720 (5GiB) 00:13:48.790 Capacity (in LBAs): 1310720 (5GiB) 00:13:48.790 Utilization (in LBAs): 1310720 (5GiB) 00:13:48.790 Thin Provisioning: Not Supported 00:13:48.790 Per-NS Atomic Units: No 00:13:48.790 Maximum Single Source Range Length: 128 00:13:48.790 Maximum Copy Length: 128 00:13:48.790 Maximum Source Range Count: 128 00:13:48.790 NGUID/EUI64 Never Reused: No 00:13:48.790 Namespace Write Protected: No 00:13:48.790 Number of LBA Formats: 8 00:13:48.790 Current LBA Format: LBA Format #04 00:13:48.790 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.790 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:48.790 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:48.790 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:48.790 LBA Format 
#04: Data Size: 4096 Metadata Size: 0 00:13:48.790 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:48.790 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:48.790 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:48.790 00:13:48.790 NVM Specific Namespace Data 00:13:48.790 =========================== 00:13:48.790 Logical Block Storage Tag Mask: 0 00:13:48.790 Protection Information Capabilities: 00:13:48.790 16b Guard Protection Information Storage Tag Support: No 00:13:48.790 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:48.790 Storage Tag Check Read Support: No 00:13:48.790 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.790 ===================================================== 00:13:48.790 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:48.790 ===================================================== 00:13:48.790 Controller Capabilities/Features 00:13:48.791 ================================ 00:13:48.791 Vendor ID: 1b36 00:13:48.791 Subsystem Vendor ID: 1af4 00:13:48.791 Serial Number: 12343 00:13:48.791 Model Number: QEMU NVMe Ctrl 00:13:48.791 Firmware Version: 8.0.0 00:13:48.791 Recommended Arb Burst: 6 00:13:48.791 IEEE OUI Identifier: 00 54 52 00:13:48.791 Multi-path I/O 00:13:48.791 May have multiple subsystem ports: No 00:13:48.791 May have multiple controllers: Yes 00:13:48.791 Associated with SR-IOV VF: No 00:13:48.791 Max Data Transfer Size: 524288 00:13:48.791 Max Number of Namespaces: 256 00:13:48.791 Max Number of I/O Queues: 64 00:13:48.791 NVMe Specification Version (VS): 1.4 00:13:48.791 NVMe Specification Version (Identify): 1.4 00:13:48.791 Maximum Queue Entries: 2048 00:13:48.791 Contiguous Queues Required: Yes 00:13:48.791 Arbitration Mechanisms Supported 00:13:48.791 Weighted Round Robin: Not Supported 00:13:48.791 Vendor Specific: Not Supported 00:13:48.791 Reset Timeout: 7500 ms 00:13:48.791 Doorbell Stride: 4 bytes 00:13:48.791 NVM Subsystem Reset: Not Supported 00:13:48.791 Command Sets Supported 00:13:48.791 NVM Command Set: Supported 00:13:48.791 Boot Partition: Not Supported 00:13:48.791 Memory Page Size Minimum: 4096 bytes 00:13:48.791 Memory Page Size Maximum: 65536 bytes 00:13:48.791 Persistent Memory Region: Not Supported 00:13:48.791 Optional Asynchronous Events Supported 00:13:48.791 Namespace Attribute Notices: Supported 00:13:48.791 Firmware Activation Notices: Not Supported 00:13:48.791 ANA Change Notices: Not Supported 00:13:48.791 PLE Aggregate Log Change Notices: Not Supported 00:13:48.791 LBA Status Info Alert Notices: Not Supported 00:13:48.791 EGE Aggregate Log Change Notices: Not Supported 00:13:48.791 Normal NVM Subsystem Shutdown event: Not Supported 00:13:48.791 Zone Descriptor Change Notices: Not Supported 00:13:48.791 
Discovery Log Change Notices: Not Supported 00:13:48.791 Controller Attributes 00:13:48.791 128-bit Host Identifier: Not Supported 00:13:48.791 Non-Operational Permissive Mode: Not Supported 00:13:48.791 NVM Sets: Not Supported 00:13:48.791 Read Recovery Levels: Not Supported 00:13:48.791 Endurance Groups: Supported 00:13:48.791 Predictable Latency Mode: Not Supported 00:13:48.791 Traffic Based Keep ALive: Not Supported 00:13:48.791 Namespace Granularity: Not Supported 00:13:48.791 SQ Associations: Not Supported 00:13:48.791 UUID List: Not Supported 00:13:48.791 Multi-Domain Subsystem: Not Supported 00:13:48.791 Fixed Capacity Management: Not Supported 00:13:48.791 Variable Capacity Management: Not Supported 00:13:48.791 Delete Endurance Group: Not Supported 00:13:48.791 Delete NVM Set: Not Supported 00:13:48.791 Extended LBA Formats Supported: Supported 00:13:48.791 Flexible Data Placement Supported: Supported 00:13:48.791 00:13:48.791 Controller Memory Buffer Support 00:13:48.791 ================================ 00:13:48.791 Supported: No 00:13:48.791 00:13:48.791 Persistent Memory Region Support 00:13:48.791 ================================ 00:13:48.791 Supported: No 00:13:48.791 00:13:48.791 Admin Command Set Attributes 00:13:48.791 ============================ 00:13:48.791 Security Send/Receive: Not Supported 00:13:48.791 Format NVM: Supported 00:13:48.791 Firmware Activate/Download: Not Supported 00:13:48.791 Namespace Management: Supported 00:13:48.791 Device Self-Test: Not Supported 00:13:48.791 Directives: Supported 00:13:48.791 NVMe-MI: Not Supported 00:13:48.791 Virtualization Management: Not Supported 00:13:48.791 Doorbell Buffer Config: Supported 00:13:48.791 Get LBA Status Capability: Not Supported 00:13:48.791 Command & Feature Lockdown Capability: Not Supported 00:13:48.791 Abort Command Limit: 4 00:13:48.791 Async Event Request Limit: 4 00:13:48.791 Number of Firmware Slots: N/A 00:13:48.791 Firmware Slot 1 Read-Only: N/A 00:13:48.791 Firmware Activation Without Reset: N/A 00:13:48.791 Multiple Update Detection Support: N/A 00:13:48.791 Firmware Update Granularity: No Information Provided 00:13:48.791 Per-Namespace SMART Log: Yes 00:13:48.791 Asymmetric Namespace Access Log Page: Not Supported 00:13:48.791 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:48.791 Command Effects Log Page: Supported 00:13:48.791 Get Log Page Extended Data: Supported 00:13:48.791 Telemetry Log Pages: Not Supported 00:13:48.791 Persistent Event Log Pages: Not Supported 00:13:48.791 Supported Log Pages Log Page: May Support 00:13:48.791 Commands Supported & Effects Log Page: Not Supported 00:13:48.791 Feature Identifiers & Effects Log Page:May Support 00:13:48.791 NVMe-MI Commands & Effects Log Page: May Support 00:13:48.791 Data Area 4 for Telemetry Log: Not Supported 00:13:48.791 Error Log Page Entries Supported: 1 00:13:48.791 Keep Alive: Not Supported 00:13:48.791 00:13:48.791 NVM Command Set Attributes 00:13:48.791 ========================== 00:13:48.791 Submission Queue Entry Size 00:13:48.791 Max: 64 00:13:48.791 Min: 64 00:13:48.791 Completion Queue Entry Size 00:13:48.791 Max: 16 00:13:48.791 Min: 16 00:13:48.791 Number of Namespaces: 256 00:13:48.791 Compare Command: Supported 00:13:48.791 Write Uncorrectable Command: Not Supported 00:13:48.791 Dataset Management Command: Supported 00:13:48.791 Write Zeroes Command: Supported 00:13:48.791 Set Features Save Field: Supported 00:13:48.791 Reservations: Not Supported 00:13:48.791 Timestamp: Supported 00:13:48.791 Copy: Supported 
00:13:48.791 Volatile Write Cache: Present 00:13:48.791 Atomic Write Unit (Normal): 1 00:13:48.791 Atomic Write Unit (PFail): 1 00:13:48.791 Atomic Compare & Write Unit: 1 00:13:48.791 Fused Compare & Write: Not Supported 00:13:48.791 Scatter-Gather List 00:13:48.791 SGL Command Set: Supported 00:13:48.791 SGL Keyed: Not Supported 00:13:48.791 SGL Bit Bucket Descriptor: Not Supported 00:13:48.791 SGL Metadata Pointer: Not Supported 00:13:48.791 Oversized SGL: Not Supported 00:13:48.791 SGL Metadata Address: Not Supported 00:13:48.791 SGL Offset: Not Supported 00:13:48.791 Transport SGL Data Block: Not Supported 00:13:48.791 Replay Protected Memory Block: Not Supported 00:13:48.791 00:13:48.791 Firmware Slot Information 00:13:48.791 ========================= 00:13:48.791 Active slot: 1 00:13:48.791 Slot 1 Firmware Revision: 1.0 00:13:48.791 00:13:48.791 00:13:48.791 Commands Supported and Effects 00:13:48.791 ============================== 00:13:48.791 Admin Commands 00:13:48.791 -------------- 00:13:48.791 Delete I/O Submission Queue (00h): Supported 00:13:48.791 Create I/O Submission Queue (01h): Supported 00:13:48.791 Get Log Page (02h): Supported 00:13:48.791 Delete I/O Completion Queue (04h): Supported 00:13:48.791 Create I/O Completion Queue (05h): Supported 00:13:48.791 Identify (06h): Supported 00:13:48.791 Abort (08h): Supported 00:13:48.791 Set Features (09h): Supported 00:13:48.791 Get Features (0Ah): Supported 00:13:48.791 Asynchronous Event Request (0Ch): Supported 00:13:48.791 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:48.791 Directive Send (19h): Supported 00:13:48.791 Directive Receive (1Ah): Supported 00:13:48.791 Virtualization Management (1Ch): Supported 00:13:48.791 Doorbell Buffer Config (7Ch): Supported 00:13:48.791 Format NVM (80h): Supported LBA-Change 00:13:48.791 I/O Commands 00:13:48.791 ------------ 00:13:48.791 Flush (00h): Supported LBA-Change 00:13:48.791 Write (01h): Supported LBA-Change 00:13:48.791 Read (02h): Supported 00:13:48.791 Compare (05h): Supported 00:13:48.791 Write Zeroes (08h): Supported LBA-Change 00:13:48.791 Dataset Management (09h): Supported LBA-Change 00:13:48.791 Unknown (0Ch): Supported 00:13:48.791 Unknown (12h): Supported 00:13:48.791 Copy (19h): Supported LBA-Change 00:13:48.791 Unknown (1Dh): Supported LBA-Change 00:13:48.791 00:13:48.791 Error Log 00:13:48.791 ========= 00:13:48.791 00:13:48.791 Arbitration 00:13:48.791 =========== 00:13:48.791 Arbitration Burst: no limit 00:13:48.791 00:13:48.791 Power Management 00:13:48.791 ================ 00:13:48.791 Number of Power States: 1 00:13:48.791 Current Power State: Power State #0 00:13:48.791 Power State #0: 00:13:48.791 Max Power: 25.00 W 00:13:48.791 Non-Operational State: Operational 00:13:48.791 Entry Latency: 16 microseconds 00:13:48.791 Exit Latency: 4 microseconds 00:13:48.791 Relative Read Throughput: 0 00:13:48.791 Relative Read Latency: 0 00:13:48.791 Relative Write Throughput: 0 00:13:48.792 Relative Write Latency: 0 00:13:48.792 Idle Power: Not Reported 00:13:48.792 Active Power: Not Reported 00:13:48.792 Non-Operational Permissive Mode: Not Supported 00:13:48.792 00:13:48.792 Health Information 00:13:48.792 ================== 00:13:48.792 Critical Warnings: 00:13:48.792 Available Spare Space: OK 00:13:48.792 Temperature: OK 00:13:48.792 Device Reliability: OK 00:13:48.792 Read Only: No 00:13:48.792 Volatile Memory Backup: OK 00:13:48.792 Current Temperature: 323 Kelvin (50 Celsius) 00:13:48.792 Temperature Threshold: 343 Kelvin (70 Celsius) 
00:13:48.792 Available Spare: 0% 00:13:48.792 Available Spare Threshold: 0% 00:13:48.792 Life Percentage Used: 0% 00:13:48.792 Data Units Read: 781 00:13:48.792 Data Units Written: 675 00:13:48.792 Host Read Commands: 33375 00:13:48.792 Host Write Commands: 31965 00:13:48.792 Controller Busy Time: 0 minutes 00:13:48.792 Power Cycles: 0 00:13:48.792 Power On Hours: 0 hours 00:13:48.792 Unsafe Shutdowns: 0 00:13:48.792 Unrecoverable Media Errors: 0 00:13:48.792 Lifetime Error Log Entries: 0 00:13:48.792 Warning Temperature Time: 0 minutes 00:13:48.792 Critical Temperature Time: 0 minutes 00:13:48.792 00:13:48.792 Number of Queues 00:13:48.792 ================ 00:13:48.792 Number of I/O Submission Queues: 64 00:13:48.792 Number of I/O Completion Queues: 64 00:13:48.792 00:13:48.792 ZNS Specific Controller Data 00:13:48.792 ============================ 00:13:48.792 Zone Append Size Limit: 0 00:13:48.792 00:13:48.792 00:13:48.792 Active Namespaces 00:13:48.792 ================= 00:13:48.792 Namespace ID:1 00:13:48.792 Error Recovery Timeout: Unlimited 00:13:48.792 Command Set Identifier: NVM (00h) 00:13:48.792 Deallocate: Supported 00:13:48.792 Deallocated/Unwritten Error: Supported 00:13:48.792 Deallocated Read Value: All 0x00 00:13:48.792 Deallocate in Write Zeroes: Not Supported 00:13:48.792 Deallocated Guard Field: 0xFFFF 00:13:48.792 Flush: Supported 00:13:48.792 Reservation: Not Supported 00:13:48.792 Namespace Sharing Capabilities: Multiple Controllers 00:13:48.792 Size (in LBAs): 262144 (1GiB) 00:13:48.792 Capacity (in LBAs): 262144 (1GiB) 00:13:48.792 Utilization (in LBAs): 262144 (1GiB) 00:13:48.792 Thin Provisioning: Not Supported 00:13:48.792 Per-NS Atomic Units: No 00:13:48.792 Maximum Single Source Range Length: 128 00:13:48.792 Maximum Copy Length: 128 00:13:48.792 Maximum Source Range Count: 128 00:13:48.792 NGUID/EUI64 Never Reused: No 00:13:48.792 Namespace Write Protected: No 00:13:48.792 Endurance group ID: 1 00:13:48.792 Number of LBA Formats: 8 00:13:48.792 Current LBA Format: LBA Format #04 00:13:48.792 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.792 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:48.792 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:48.792 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:48.792 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:48.792 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:48.792 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:48.792 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:48.792 00:13:48.792 Get Feature FDP: 00:13:48.792 ================ 00:13:48.792 Enabled: Yes 00:13:48.792 FDP configuration index: 0 00:13:48.792 00:13:48.792 FDP configurations log page 00:13:48.792 =========================== 00:13:48.792 Number of FDP configurations: 1 00:13:48.792 Version: 0 00:13:48.792 Size: 112 00:13:48.792 FDP Configuration Descriptor: 0 00:13:48.792 Descriptor Size: 96 00:13:48.792 Reclaim Group Identifier format: 2 00:13:48.792 FDP Volatile Write Cache: Not Present 00:13:48.792 FDP Configuration: Valid 00:13:48.792 Vendor Specific Size: 0 00:13:48.792 Number of Reclaim Groups: 2 00:13:48.792 Number of Recalim Unit Handles: 8 00:13:48.792 Max Placement Identifiers: 128 00:13:48.792 Number of Namespaces Suppprted: 256 00:13:48.792 Reclaim unit Nominal Size: 6000000 bytes 00:13:48.792 Estimated Reclaim Unit Time Limit: Not Reported 00:13:48.792 RUH Desc #000: RUH Type: Initially Isolated 00:13:48.792 RUH Desc #001: RUH Type: Initially Isolated 00:13:48.792 RUH Desc 
#002: RUH Type: Initially Isolated 00:13:48.792 RUH Desc #003: RUH Type: Initially Isolated 00:13:48.792 RUH Desc #004: RUH Type: Initially Isolated 00:13:48.792 RUH Desc #005: RUH Type: Initially Isolated 00:13:48.792 RUH Desc #006: RUH Type: Initially Isolated 00:13:48.792 RUH Desc #007: RUH Type: Initially Isolated 00:13:48.792 00:13:48.792 FDP reclaim unit handle usage log page 00:13:48.792 ====================================== 00:13:48.792 Number of Reclaim Unit Handles: 8 00:13:48.792 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:48.792 RUH Usage Desc #001: RUH Attributes: Unused 00:13:48.792 RUH Usage Desc #002: RUH Attributes: Unused 00:13:48.792 RUH Usage Desc #003: RUH Attributes: Unused 00:13:48.792 RUH Usage Desc #004: RUH Attributes: Unused 00:13:48.792 RUH Usage Desc #005: RUH Attributes: Unused 00:13:48.792 RUH Usage Desc #006: RUH Attributes: Unused 00:13:48.792 RUH Usage Desc #007: RUH Attributes: Unused 00:13:48.792 00:13:48.792 FDP statistics log page 00:13:48.792 ======================= 00:13:48.792 Host bytes with metadata written: 410427392 00:13:48.792 Medi[2024-07-26 03:42:03.447775] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 70036 terminated unexpected 00:13:48.792 a bytes with metadata written: 410472448 00:13:48.792 Media bytes erased: 0 00:13:48.792 00:13:48.792 FDP events log page 00:13:48.792 =================== 00:13:48.792 Number of FDP events: 0 00:13:48.792 00:13:48.792 NVM Specific Namespace Data 00:13:48.792 =========================== 00:13:48.792 Logical Block Storage Tag Mask: 0 00:13:48.792 Protection Information Capabilities: 00:13:48.792 16b Guard Protection Information Storage Tag Support: No 00:13:48.792 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:48.792 Storage Tag Check Read Support: No 00:13:48.792 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.792 ===================================================== 00:13:48.792 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:48.792 ===================================================== 00:13:48.792 Controller Capabilities/Features 00:13:48.792 ================================ 00:13:48.792 Vendor ID: 1b36 00:13:48.792 Subsystem Vendor ID: 1af4 00:13:48.792 Serial Number: 12342 00:13:48.792 Model Number: QEMU NVMe Ctrl 00:13:48.792 Firmware Version: 8.0.0 00:13:48.792 Recommended Arb Burst: 6 00:13:48.792 IEEE OUI Identifier: 00 54 52 00:13:48.792 Multi-path I/O 00:13:48.792 May have multiple subsystem ports: No 00:13:48.792 May have multiple controllers: No 00:13:48.792 Associated with SR-IOV VF: No 00:13:48.792 Max Data Transfer Size: 524288 00:13:48.792 Max Number of Namespaces: 256 00:13:48.792 Max Number of I/O Queues: 64 00:13:48.792 NVMe 
Specification Version (VS): 1.4 00:13:48.792 NVMe Specification Version (Identify): 1.4 00:13:48.792 Maximum Queue Entries: 2048 00:13:48.792 Contiguous Queues Required: Yes 00:13:48.792 Arbitration Mechanisms Supported 00:13:48.792 Weighted Round Robin: Not Supported 00:13:48.792 Vendor Specific: Not Supported 00:13:48.792 Reset Timeout: 7500 ms 00:13:48.792 Doorbell Stride: 4 bytes 00:13:48.793 NVM Subsystem Reset: Not Supported 00:13:48.793 Command Sets Supported 00:13:48.793 NVM Command Set: Supported 00:13:48.793 Boot Partition: Not Supported 00:13:48.793 Memory Page Size Minimum: 4096 bytes 00:13:48.793 Memory Page Size Maximum: 65536 bytes 00:13:48.793 Persistent Memory Region: Not Supported 00:13:48.793 Optional Asynchronous Events Supported 00:13:48.793 Namespace Attribute Notices: Supported 00:13:48.793 Firmware Activation Notices: Not Supported 00:13:48.793 ANA Change Notices: Not Supported 00:13:48.793 PLE Aggregate Log Change Notices: Not Supported 00:13:48.793 LBA Status Info Alert Notices: Not Supported 00:13:48.793 EGE Aggregate Log Change Notices: Not Supported 00:13:48.793 Normal NVM Subsystem Shutdown event: Not Supported 00:13:48.793 Zone Descriptor Change Notices: Not Supported 00:13:48.793 Discovery Log Change Notices: Not Supported 00:13:48.793 Controller Attributes 00:13:48.793 128-bit Host Identifier: Not Supported 00:13:48.793 Non-Operational Permissive Mode: Not Supported 00:13:48.793 NVM Sets: Not Supported 00:13:48.793 Read Recovery Levels: Not Supported 00:13:48.793 Endurance Groups: Not Supported 00:13:48.793 Predictable Latency Mode: Not Supported 00:13:48.793 Traffic Based Keep ALive: Not Supported 00:13:48.793 Namespace Granularity: Not Supported 00:13:48.793 SQ Associations: Not Supported 00:13:48.793 UUID List: Not Supported 00:13:48.793 Multi-Domain Subsystem: Not Supported 00:13:48.793 Fixed Capacity Management: Not Supported 00:13:48.793 Variable Capacity Management: Not Supported 00:13:48.793 Delete Endurance Group: Not Supported 00:13:48.793 Delete NVM Set: Not Supported 00:13:48.793 Extended LBA Formats Supported: Supported 00:13:48.793 Flexible Data Placement Supported: Not Supported 00:13:48.793 00:13:48.793 Controller Memory Buffer Support 00:13:48.793 ================================ 00:13:48.793 Supported: No 00:13:48.793 00:13:48.793 Persistent Memory Region Support 00:13:48.793 ================================ 00:13:48.793 Supported: No 00:13:48.793 00:13:48.793 Admin Command Set Attributes 00:13:48.793 ============================ 00:13:48.793 Security Send/Receive: Not Supported 00:13:48.793 Format NVM: Supported 00:13:48.793 Firmware Activate/Download: Not Supported 00:13:48.793 Namespace Management: Supported 00:13:48.793 Device Self-Test: Not Supported 00:13:48.793 Directives: Supported 00:13:48.793 NVMe-MI: Not Supported 00:13:48.793 Virtualization Management: Not Supported 00:13:48.793 Doorbell Buffer Config: Supported 00:13:48.793 Get LBA Status Capability: Not Supported 00:13:48.793 Command & Feature Lockdown Capability: Not Supported 00:13:48.793 Abort Command Limit: 4 00:13:48.793 Async Event Request Limit: 4 00:13:48.793 Number of Firmware Slots: N/A 00:13:48.793 Firmware Slot 1 Read-Only: N/A 00:13:48.793 Firmware Activation Without Reset: N/A 00:13:48.793 Multiple Update Detection Support: N/A 00:13:48.793 Firmware Update Granularity: No Information Provided 00:13:48.793 Per-Namespace SMART Log: Yes 00:13:48.793 Asymmetric Namespace Access Log Page: Not Supported 00:13:48.793 Subsystem NQN: nqn.2019-08.org.qemu:12342 
00:13:48.793 Command Effects Log Page: Supported 00:13:48.793 Get Log Page Extended Data: Supported 00:13:48.793 Telemetry Log Pages: Not Supported 00:13:48.793 Persistent Event Log Pages: Not Supported 00:13:48.793 Supported Log Pages Log Page: May Support 00:13:48.793 Commands Supported & Effects Log Page: Not Supported 00:13:48.793 Feature Identifiers & Effects Log Page:May Support 00:13:48.793 NVMe-MI Commands & Effects Log Page: May Support 00:13:48.793 Data Area 4 for Telemetry Log: Not Supported 00:13:48.793 Error Log Page Entries Supported: 1 00:13:48.793 Keep Alive: Not Supported 00:13:48.793 00:13:48.793 NVM Command Set Attributes 00:13:48.793 ========================== 00:13:48.793 Submission Queue Entry Size 00:13:48.793 Max: 64 00:13:48.793 Min: 64 00:13:48.793 Completion Queue Entry Size 00:13:48.793 Max: 16 00:13:48.793 Min: 16 00:13:48.793 Number of Namespaces: 256 00:13:48.793 Compare Command: Supported 00:13:48.793 Write Uncorrectable Command: Not Supported 00:13:48.793 Dataset Management Command: Supported 00:13:48.793 Write Zeroes Command: Supported 00:13:48.793 Set Features Save Field: Supported 00:13:48.793 Reservations: Not Supported 00:13:48.793 Timestamp: Supported 00:13:48.793 Copy: Supported 00:13:48.793 Volatile Write Cache: Present 00:13:48.793 Atomic Write Unit (Normal): 1 00:13:48.793 Atomic Write Unit (PFail): 1 00:13:48.793 Atomic Compare & Write Unit: 1 00:13:48.793 Fused Compare & Write: Not Supported 00:13:48.793 Scatter-Gather List 00:13:48.793 SGL Command Set: Supported 00:13:48.793 SGL Keyed: Not Supported 00:13:48.793 SGL Bit Bucket Descriptor: Not Supported 00:13:48.793 SGL Metadata Pointer: Not Supported 00:13:48.793 Oversized SGL: Not Supported 00:13:48.793 SGL Metadata Address: Not Supported 00:13:48.793 SGL Offset: Not Supported 00:13:48.793 Transport SGL Data Block: Not Supported 00:13:48.793 Replay Protected Memory Block: Not Supported 00:13:48.793 00:13:48.793 Firmware Slot Information 00:13:48.793 ========================= 00:13:48.793 Active slot: 1 00:13:48.793 Slot 1 Firmware Revision: 1.0 00:13:48.793 00:13:48.793 00:13:48.793 Commands Supported and Effects 00:13:48.793 ============================== 00:13:48.793 Admin Commands 00:13:48.793 -------------- 00:13:48.793 Delete I/O Submission Queue (00h): Supported 00:13:48.793 Create I/O Submission Queue (01h): Supported 00:13:48.793 Get Log Page (02h): Supported 00:13:48.793 Delete I/O Completion Queue (04h): Supported 00:13:48.793 Create I/O Completion Queue (05h): Supported 00:13:48.793 Identify (06h): Supported 00:13:48.793 Abort (08h): Supported 00:13:48.793 Set Features (09h): Supported 00:13:48.793 Get Features (0Ah): Supported 00:13:48.793 Asynchronous Event Request (0Ch): Supported 00:13:48.793 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:48.793 Directive Send (19h): Supported 00:13:48.793 Directive Receive (1Ah): Supported 00:13:48.793 Virtualization Management (1Ch): Supported 00:13:48.793 Doorbell Buffer Config (7Ch): Supported 00:13:48.793 Format NVM (80h): Supported LBA-Change 00:13:48.793 I/O Commands 00:13:48.793 ------------ 00:13:48.793 Flush (00h): Supported LBA-Change 00:13:48.793 Write (01h): Supported LBA-Change 00:13:48.793 Read (02h): Supported 00:13:48.793 Compare (05h): Supported 00:13:48.793 Write Zeroes (08h): Supported LBA-Change 00:13:48.793 Dataset Management (09h): Supported LBA-Change 00:13:48.793 Unknown (0Ch): Supported 00:13:48.793 Unknown (12h): Supported 00:13:48.793 Copy (19h): Supported LBA-Change 00:13:48.793 Unknown (1Dh): 
Supported LBA-Change 00:13:48.793 00:13:48.793 Error Log 00:13:48.793 ========= 00:13:48.793 00:13:48.793 Arbitration 00:13:48.793 =========== 00:13:48.793 Arbitration Burst: no limit 00:13:48.793 00:13:48.793 Power Management 00:13:48.793 ================ 00:13:48.793 Number of Power States: 1 00:13:48.793 Current Power State: Power State #0 00:13:48.793 Power State #0: 00:13:48.793 Max Power: 25.00 W 00:13:48.793 Non-Operational State: Operational 00:13:48.793 Entry Latency: 16 microseconds 00:13:48.793 Exit Latency: 4 microseconds 00:13:48.793 Relative Read Throughput: 0 00:13:48.793 Relative Read Latency: 0 00:13:48.793 Relative Write Throughput: 0 00:13:48.793 Relative Write Latency: 0 00:13:48.793 Idle Power: Not Reported 00:13:48.793 Active Power: Not Reported 00:13:48.793 Non-Operational Permissive Mode: Not Supported 00:13:48.793 00:13:48.793 Health Information 00:13:48.793 ================== 00:13:48.793 Critical Warnings: 00:13:48.793 Available Spare Space: OK 00:13:48.793 Temperature: OK 00:13:48.793 Device Reliability: OK 00:13:48.793 Read Only: No 00:13:48.793 Volatile Memory Backup: OK 00:13:48.793 Current Temperature: 323 Kelvin (50 Celsius) 00:13:48.793 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:48.793 Available Spare: 0% 00:13:48.793 Available Spare Threshold: 0% 00:13:48.793 Life Percentage Used: 0% 00:13:48.793 Data Units Read: 2147 00:13:48.794 Data Units Written: 1827 00:13:48.794 Host Read Commands: 98236 00:13:48.794 Host Write Commands: 94006 00:13:48.794 Controller Busy Time: 0 minutes 00:13:48.794 Power Cycles: 0 00:13:48.794 Power On Hours: 0 hours 00:13:48.794 Unsafe Shutdowns: 0 00:13:48.794 Unrecoverable Media Errors: 0 00:13:48.794 Lifetime Error Log Entries: 0 00:13:48.794 Warning Temperature Time: 0 minutes 00:13:48.794 Critical Temperature Time: 0 minutes 00:13:48.794 00:13:48.794 Number of Queues 00:13:48.794 ================ 00:13:48.794 Number of I/O Submission Queues: 64 00:13:48.794 Number of I/O Completion Queues: 64 00:13:48.794 00:13:48.794 ZNS Specific Controller Data 00:13:48.794 ============================ 00:13:48.794 Zone Append Size Limit: 0 00:13:48.794 00:13:48.794 00:13:48.794 Active Namespaces 00:13:48.794 ================= 00:13:48.794 Namespace ID:1 00:13:48.794 Error Recovery Timeout: Unlimited 00:13:48.794 Command Set Identifier: NVM (00h) 00:13:48.794 Deallocate: Supported 00:13:48.794 Deallocated/Unwritten Error: Supported 00:13:48.794 Deallocated Read Value: All 0x00 00:13:48.794 Deallocate in Write Zeroes: Not Supported 00:13:48.794 Deallocated Guard Field: 0xFFFF 00:13:48.794 Flush: Supported 00:13:48.794 Reservation: Not Supported 00:13:48.794 Namespace Sharing Capabilities: Private 00:13:48.794 Size (in LBAs): 1048576 (4GiB) 00:13:48.794 Capacity (in LBAs): 1048576 (4GiB) 00:13:48.794 Utilization (in LBAs): 1048576 (4GiB) 00:13:48.794 Thin Provisioning: Not Supported 00:13:48.794 Per-NS Atomic Units: No 00:13:48.794 Maximum Single Source Range Length: 128 00:13:48.794 Maximum Copy Length: 128 00:13:48.794 Maximum Source Range Count: 128 00:13:48.794 NGUID/EUI64 Never Reused: No 00:13:48.794 Namespace Write Protected: No 00:13:48.794 Number of LBA Formats: 8 00:13:48.794 Current LBA Format: LBA Format #04 00:13:48.794 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.794 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:48.794 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:48.794 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:48.794 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:13:48.794 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:48.794 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:48.794 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:48.794 00:13:48.794 NVM Specific Namespace Data 00:13:48.794 =========================== 00:13:48.794 Logical Block Storage Tag Mask: 0 00:13:48.794 Protection Information Capabilities: 00:13:48.794 16b Guard Protection Information Storage Tag Support: No 00:13:48.794 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:48.794 Storage Tag Check Read Support: No 00:13:48.794 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Namespace ID:2 00:13:48.794 Error Recovery Timeout: Unlimited 00:13:48.794 Command Set Identifier: NVM (00h) 00:13:48.794 Deallocate: Supported 00:13:48.794 Deallocated/Unwritten Error: Supported 00:13:48.794 Deallocated Read Value: All 0x00 00:13:48.794 Deallocate in Write Zeroes: Not Supported 00:13:48.794 Deallocated Guard Field: 0xFFFF 00:13:48.794 Flush: Supported 00:13:48.794 Reservation: Not Supported 00:13:48.794 Namespace Sharing Capabilities: Private 00:13:48.794 Size (in LBAs): 1048576 (4GiB) 00:13:48.794 Capacity (in LBAs): 1048576 (4GiB) 00:13:48.794 Utilization (in LBAs): 1048576 (4GiB) 00:13:48.794 Thin Provisioning: Not Supported 00:13:48.794 Per-NS Atomic Units: No 00:13:48.794 Maximum Single Source Range Length: 128 00:13:48.794 Maximum Copy Length: 128 00:13:48.794 Maximum Source Range Count: 128 00:13:48.794 NGUID/EUI64 Never Reused: No 00:13:48.794 Namespace Write Protected: No 00:13:48.794 Number of LBA Formats: 8 00:13:48.794 Current LBA Format: LBA Format #04 00:13:48.794 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.794 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:48.794 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:48.794 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:48.794 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:48.794 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:48.794 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:48.794 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:48.794 00:13:48.794 NVM Specific Namespace Data 00:13:48.794 =========================== 00:13:48.794 Logical Block Storage Tag Mask: 0 00:13:48.794 Protection Information Capabilities: 00:13:48.794 16b Guard Protection Information Storage Tag Support: No 00:13:48.794 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:48.794 Storage Tag Check Read Support: No 00:13:48.794 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Namespace ID:3 00:13:48.794 Error Recovery Timeout: Unlimited 00:13:48.794 Command Set Identifier: NVM (00h) 00:13:48.794 Deallocate: Supported 00:13:48.794 Deallocated/Unwritten Error: Supported 00:13:48.794 Deallocated Read Value: All 0x00 00:13:48.794 Deallocate in Write Zeroes: Not Supported 00:13:48.794 Deallocated Guard Field: 0xFFFF 00:13:48.794 Flush: Supported 00:13:48.794 Reservation: Not Supported 00:13:48.794 Namespace Sharing Capabilities: Private 00:13:48.794 Size (in LBAs): 1048576 (4GiB) 00:13:48.794 Capacity (in LBAs): 1048576 (4GiB) 00:13:48.794 Utilization (in LBAs): 1048576 (4GiB) 00:13:48.794 Thin Provisioning: Not Supported 00:13:48.794 Per-NS Atomic Units: No 00:13:48.794 Maximum Single Source Range Length: 128 00:13:48.794 Maximum Copy Length: 128 00:13:48.794 Maximum Source Range Count: 128 00:13:48.794 NGUID/EUI64 Never Reused: No 00:13:48.794 Namespace Write Protected: No 00:13:48.794 Number of LBA Formats: 8 00:13:48.794 Current LBA Format: LBA Format #04 00:13:48.794 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.794 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:48.794 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:48.794 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:48.794 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:48.794 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:48.794 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:48.794 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:48.794 00:13:48.794 NVM Specific Namespace Data 00:13:48.794 =========================== 00:13:48.794 Logical Block Storage Tag Mask: 0 00:13:48.794 Protection Information Capabilities: 00:13:48.794 16b Guard Protection Information Storage Tag Support: No 00:13:48.794 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:48.794 Storage Tag Check Read Support: No 00:13:48.794 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:48.794 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:48.794 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:13:49.053 ===================================================== 00:13:49.053 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:49.053 ===================================================== 00:13:49.053 Controller Capabilities/Features 00:13:49.053 ================================ 00:13:49.053 Vendor ID: 1b36 00:13:49.053 Subsystem Vendor ID: 1af4 00:13:49.053 Serial Number: 12340 00:13:49.053 Model Number: QEMU NVMe Ctrl 00:13:49.053 Firmware Version: 8.0.0 00:13:49.053 Recommended Arb Burst: 6 00:13:49.053 IEEE OUI Identifier: 00 54 52 00:13:49.053 Multi-path I/O 00:13:49.053 May have multiple subsystem ports: No 00:13:49.053 May have multiple controllers: No 00:13:49.053 Associated with SR-IOV VF: No 00:13:49.053 Max Data Transfer Size: 524288 00:13:49.053 Max Number of Namespaces: 256 00:13:49.053 Max Number of I/O Queues: 64 00:13:49.053 NVMe Specification Version (VS): 1.4 00:13:49.053 NVMe Specification Version (Identify): 1.4 00:13:49.053 Maximum Queue Entries: 2048 00:13:49.053 Contiguous Queues Required: Yes 00:13:49.053 Arbitration Mechanisms Supported 00:13:49.053 Weighted Round Robin: Not Supported 00:13:49.053 Vendor Specific: Not Supported 00:13:49.053 Reset Timeout: 7500 ms 00:13:49.053 Doorbell Stride: 4 bytes 00:13:49.053 NVM Subsystem Reset: Not Supported 00:13:49.053 Command Sets Supported 00:13:49.053 NVM Command Set: Supported 00:13:49.053 Boot Partition: Not Supported 00:13:49.053 Memory Page Size Minimum: 4096 bytes 00:13:49.053 Memory Page Size Maximum: 65536 bytes 00:13:49.053 Persistent Memory Region: Not Supported 00:13:49.053 Optional Asynchronous Events Supported 00:13:49.053 Namespace Attribute Notices: Supported 00:13:49.053 Firmware Activation Notices: Not Supported 00:13:49.053 ANA Change Notices: Not Supported 00:13:49.053 PLE Aggregate Log Change Notices: Not Supported 00:13:49.053 LBA Status Info Alert Notices: Not Supported 00:13:49.053 EGE Aggregate Log Change Notices: Not Supported 00:13:49.053 Normal NVM Subsystem Shutdown event: Not Supported 00:13:49.053 Zone Descriptor Change Notices: Not Supported 00:13:49.053 Discovery Log Change Notices: Not Supported 00:13:49.053 Controller Attributes 00:13:49.053 128-bit Host Identifier: Not Supported 00:13:49.053 Non-Operational Permissive Mode: Not Supported 00:13:49.053 NVM Sets: Not Supported 00:13:49.053 Read Recovery Levels: Not Supported 00:13:49.053 Endurance Groups: Not Supported 00:13:49.053 Predictable Latency Mode: Not Supported 00:13:49.053 Traffic Based Keep ALive: Not Supported 00:13:49.053 Namespace Granularity: Not Supported 00:13:49.053 SQ Associations: Not Supported 00:13:49.053 UUID List: Not Supported 00:13:49.053 Multi-Domain Subsystem: Not Supported 00:13:49.053 Fixed Capacity Management: Not Supported 00:13:49.053 Variable Capacity Management: Not Supported 00:13:49.053 Delete Endurance Group: Not Supported 00:13:49.053 Delete NVM Set: Not Supported 00:13:49.053 Extended LBA Formats Supported: Supported 00:13:49.053 Flexible Data Placement Supported: Not Supported 00:13:49.053 00:13:49.053 Controller Memory Buffer Support 00:13:49.053 ================================ 00:13:49.053 Supported: No 00:13:49.053 00:13:49.053 Persistent Memory Region Support 00:13:49.053 ================================ 00:13:49.053 Supported: No 00:13:49.053 00:13:49.053 Admin Command Set Attributes 00:13:49.053 ============================ 00:13:49.053 Security Send/Receive: Not Supported 00:13:49.053 
Format NVM: Supported 00:13:49.053 Firmware Activate/Download: Not Supported 00:13:49.053 Namespace Management: Supported 00:13:49.053 Device Self-Test: Not Supported 00:13:49.053 Directives: Supported 00:13:49.053 NVMe-MI: Not Supported 00:13:49.053 Virtualization Management: Not Supported 00:13:49.053 Doorbell Buffer Config: Supported 00:13:49.053 Get LBA Status Capability: Not Supported 00:13:49.053 Command & Feature Lockdown Capability: Not Supported 00:13:49.053 Abort Command Limit: 4 00:13:49.053 Async Event Request Limit: 4 00:13:49.053 Number of Firmware Slots: N/A 00:13:49.053 Firmware Slot 1 Read-Only: N/A 00:13:49.053 Firmware Activation Without Reset: N/A 00:13:49.053 Multiple Update Detection Support: N/A 00:13:49.053 Firmware Update Granularity: No Information Provided 00:13:49.053 Per-Namespace SMART Log: Yes 00:13:49.053 Asymmetric Namespace Access Log Page: Not Supported 00:13:49.053 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:49.053 Command Effects Log Page: Supported 00:13:49.053 Get Log Page Extended Data: Supported 00:13:49.053 Telemetry Log Pages: Not Supported 00:13:49.053 Persistent Event Log Pages: Not Supported 00:13:49.053 Supported Log Pages Log Page: May Support 00:13:49.053 Commands Supported & Effects Log Page: Not Supported 00:13:49.053 Feature Identifiers & Effects Log Page:May Support 00:13:49.053 NVMe-MI Commands & Effects Log Page: May Support 00:13:49.053 Data Area 4 for Telemetry Log: Not Supported 00:13:49.053 Error Log Page Entries Supported: 1 00:13:49.053 Keep Alive: Not Supported 00:13:49.053 00:13:49.054 NVM Command Set Attributes 00:13:49.054 ========================== 00:13:49.054 Submission Queue Entry Size 00:13:49.054 Max: 64 00:13:49.054 Min: 64 00:13:49.054 Completion Queue Entry Size 00:13:49.054 Max: 16 00:13:49.054 Min: 16 00:13:49.054 Number of Namespaces: 256 00:13:49.054 Compare Command: Supported 00:13:49.054 Write Uncorrectable Command: Not Supported 00:13:49.054 Dataset Management Command: Supported 00:13:49.054 Write Zeroes Command: Supported 00:13:49.054 Set Features Save Field: Supported 00:13:49.054 Reservations: Not Supported 00:13:49.054 Timestamp: Supported 00:13:49.054 Copy: Supported 00:13:49.054 Volatile Write Cache: Present 00:13:49.054 Atomic Write Unit (Normal): 1 00:13:49.054 Atomic Write Unit (PFail): 1 00:13:49.054 Atomic Compare & Write Unit: 1 00:13:49.054 Fused Compare & Write: Not Supported 00:13:49.054 Scatter-Gather List 00:13:49.054 SGL Command Set: Supported 00:13:49.054 SGL Keyed: Not Supported 00:13:49.054 SGL Bit Bucket Descriptor: Not Supported 00:13:49.054 SGL Metadata Pointer: Not Supported 00:13:49.054 Oversized SGL: Not Supported 00:13:49.054 SGL Metadata Address: Not Supported 00:13:49.054 SGL Offset: Not Supported 00:13:49.054 Transport SGL Data Block: Not Supported 00:13:49.054 Replay Protected Memory Block: Not Supported 00:13:49.054 00:13:49.054 Firmware Slot Information 00:13:49.054 ========================= 00:13:49.054 Active slot: 1 00:13:49.054 Slot 1 Firmware Revision: 1.0 00:13:49.054 00:13:49.054 00:13:49.054 Commands Supported and Effects 00:13:49.054 ============================== 00:13:49.054 Admin Commands 00:13:49.054 -------------- 00:13:49.054 Delete I/O Submission Queue (00h): Supported 00:13:49.054 Create I/O Submission Queue (01h): Supported 00:13:49.054 Get Log Page (02h): Supported 00:13:49.054 Delete I/O Completion Queue (04h): Supported 00:13:49.054 Create I/O Completion Queue (05h): Supported 00:13:49.054 Identify (06h): Supported 00:13:49.054 Abort (08h): Supported 
00:13:49.054 Set Features (09h): Supported 00:13:49.054 Get Features (0Ah): Supported 00:13:49.054 Asynchronous Event Request (0Ch): Supported 00:13:49.054 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:49.054 Directive Send (19h): Supported 00:13:49.054 Directive Receive (1Ah): Supported 00:13:49.054 Virtualization Management (1Ch): Supported 00:13:49.054 Doorbell Buffer Config (7Ch): Supported 00:13:49.054 Format NVM (80h): Supported LBA-Change 00:13:49.054 I/O Commands 00:13:49.054 ------------ 00:13:49.054 Flush (00h): Supported LBA-Change 00:13:49.054 Write (01h): Supported LBA-Change 00:13:49.054 Read (02h): Supported 00:13:49.054 Compare (05h): Supported 00:13:49.054 Write Zeroes (08h): Supported LBA-Change 00:13:49.054 Dataset Management (09h): Supported LBA-Change 00:13:49.054 Unknown (0Ch): Supported 00:13:49.054 Unknown (12h): Supported 00:13:49.054 Copy (19h): Supported LBA-Change 00:13:49.054 Unknown (1Dh): Supported LBA-Change 00:13:49.054 00:13:49.054 Error Log 00:13:49.054 ========= 00:13:49.054 00:13:49.054 Arbitration 00:13:49.054 =========== 00:13:49.054 Arbitration Burst: no limit 00:13:49.054 00:13:49.054 Power Management 00:13:49.054 ================ 00:13:49.054 Number of Power States: 1 00:13:49.054 Current Power State: Power State #0 00:13:49.054 Power State #0: 00:13:49.054 Max Power: 25.00 W 00:13:49.054 Non-Operational State: Operational 00:13:49.054 Entry Latency: 16 microseconds 00:13:49.054 Exit Latency: 4 microseconds 00:13:49.054 Relative Read Throughput: 0 00:13:49.054 Relative Read Latency: 0 00:13:49.054 Relative Write Throughput: 0 00:13:49.054 Relative Write Latency: 0 00:13:49.054 Idle Power: Not Reported 00:13:49.054 Active Power: Not Reported 00:13:49.054 Non-Operational Permissive Mode: Not Supported 00:13:49.054 00:13:49.054 Health Information 00:13:49.054 ================== 00:13:49.054 Critical Warnings: 00:13:49.054 Available Spare Space: OK 00:13:49.054 Temperature: OK 00:13:49.054 Device Reliability: OK 00:13:49.054 Read Only: No 00:13:49.054 Volatile Memory Backup: OK 00:13:49.054 Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.054 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:49.054 Available Spare: 0% 00:13:49.054 Available Spare Threshold: 0% 00:13:49.054 Life Percentage Used: 0% 00:13:49.054 Data Units Read: 664 00:13:49.054 Data Units Written: 556 00:13:49.054 Host Read Commands: 31952 00:13:49.054 Host Write Commands: 30990 00:13:49.054 Controller Busy Time: 0 minutes 00:13:49.054 Power Cycles: 0 00:13:49.054 Power On Hours: 0 hours 00:13:49.054 Unsafe Shutdowns: 0 00:13:49.054 Unrecoverable Media Errors: 0 00:13:49.054 Lifetime Error Log Entries: 0 00:13:49.054 Warning Temperature Time: 0 minutes 00:13:49.054 Critical Temperature Time: 0 minutes 00:13:49.054 00:13:49.054 Number of Queues 00:13:49.054 ================ 00:13:49.054 Number of I/O Submission Queues: 64 00:13:49.054 Number of I/O Completion Queues: 64 00:13:49.054 00:13:49.054 ZNS Specific Controller Data 00:13:49.054 ============================ 00:13:49.054 Zone Append Size Limit: 0 00:13:49.054 00:13:49.054 00:13:49.054 Active Namespaces 00:13:49.054 ================= 00:13:49.054 Namespace ID:1 00:13:49.054 Error Recovery Timeout: Unlimited 00:13:49.054 Command Set Identifier: NVM (00h) 00:13:49.054 Deallocate: Supported 00:13:49.054 Deallocated/Unwritten Error: Supported 00:13:49.054 Deallocated Read Value: All 0x00 00:13:49.054 Deallocate in Write Zeroes: Not Supported 00:13:49.054 Deallocated Guard Field: 0xFFFF 00:13:49.054 Flush: 
Supported 00:13:49.054 Reservation: Not Supported 00:13:49.054 Metadata Transferred as: Separate Metadata Buffer 00:13:49.054 Namespace Sharing Capabilities: Private 00:13:49.054 Size (in LBAs): 1548666 (5GiB) 00:13:49.054 Capacity (in LBAs): 1548666 (5GiB) 00:13:49.054 Utilization (in LBAs): 1548666 (5GiB) 00:13:49.054 Thin Provisioning: Not Supported 00:13:49.054 Per-NS Atomic Units: No 00:13:49.054 Maximum Single Source Range Length: 128 00:13:49.054 Maximum Copy Length: 128 00:13:49.054 Maximum Source Range Count: 128 00:13:49.054 NGUID/EUI64 Never Reused: No 00:13:49.054 Namespace Write Protected: No 00:13:49.054 Number of LBA Formats: 8 00:13:49.054 Current LBA Format: LBA Format #07 00:13:49.054 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.054 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:49.054 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:49.054 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:49.054 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:49.054 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:49.054 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:49.054 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:49.054 00:13:49.054 NVM Specific Namespace Data 00:13:49.054 =========================== 00:13:49.055 Logical Block Storage Tag Mask: 0 00:13:49.055 Protection Information Capabilities: 00:13:49.055 16b Guard Protection Information Storage Tag Support: No 00:13:49.055 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:49.055 Storage Tag Check Read Support: No 00:13:49.055 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.055 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:49.055 03:42:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:13:49.314 ===================================================== 00:13:49.314 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:49.314 ===================================================== 00:13:49.314 Controller Capabilities/Features 00:13:49.314 ================================ 00:13:49.314 Vendor ID: 1b36 00:13:49.314 Subsystem Vendor ID: 1af4 00:13:49.315 Serial Number: 12341 00:13:49.315 Model Number: QEMU NVMe Ctrl 00:13:49.315 Firmware Version: 8.0.0 00:13:49.315 Recommended Arb Burst: 6 00:13:49.315 IEEE OUI Identifier: 00 54 52 00:13:49.315 Multi-path I/O 00:13:49.315 May have multiple subsystem ports: No 00:13:49.315 May have multiple controllers: No 00:13:49.315 Associated with SR-IOV VF: No 00:13:49.315 Max Data Transfer Size: 524288 00:13:49.315 Max Number of Namespaces: 256 00:13:49.315 Max Number of I/O Queues: 64 00:13:49.315 NVMe 
Specification Version (VS): 1.4 00:13:49.315 NVMe Specification Version (Identify): 1.4 00:13:49.315 Maximum Queue Entries: 2048 00:13:49.315 Contiguous Queues Required: Yes 00:13:49.315 Arbitration Mechanisms Supported 00:13:49.315 Weighted Round Robin: Not Supported 00:13:49.315 Vendor Specific: Not Supported 00:13:49.315 Reset Timeout: 7500 ms 00:13:49.315 Doorbell Stride: 4 bytes 00:13:49.315 NVM Subsystem Reset: Not Supported 00:13:49.315 Command Sets Supported 00:13:49.315 NVM Command Set: Supported 00:13:49.315 Boot Partition: Not Supported 00:13:49.315 Memory Page Size Minimum: 4096 bytes 00:13:49.315 Memory Page Size Maximum: 65536 bytes 00:13:49.315 Persistent Memory Region: Not Supported 00:13:49.315 Optional Asynchronous Events Supported 00:13:49.315 Namespace Attribute Notices: Supported 00:13:49.315 Firmware Activation Notices: Not Supported 00:13:49.315 ANA Change Notices: Not Supported 00:13:49.315 PLE Aggregate Log Change Notices: Not Supported 00:13:49.315 LBA Status Info Alert Notices: Not Supported 00:13:49.315 EGE Aggregate Log Change Notices: Not Supported 00:13:49.315 Normal NVM Subsystem Shutdown event: Not Supported 00:13:49.315 Zone Descriptor Change Notices: Not Supported 00:13:49.315 Discovery Log Change Notices: Not Supported 00:13:49.315 Controller Attributes 00:13:49.315 128-bit Host Identifier: Not Supported 00:13:49.315 Non-Operational Permissive Mode: Not Supported 00:13:49.315 NVM Sets: Not Supported 00:13:49.315 Read Recovery Levels: Not Supported 00:13:49.315 Endurance Groups: Not Supported 00:13:49.315 Predictable Latency Mode: Not Supported 00:13:49.315 Traffic Based Keep ALive: Not Supported 00:13:49.315 Namespace Granularity: Not Supported 00:13:49.315 SQ Associations: Not Supported 00:13:49.315 UUID List: Not Supported 00:13:49.315 Multi-Domain Subsystem: Not Supported 00:13:49.315 Fixed Capacity Management: Not Supported 00:13:49.315 Variable Capacity Management: Not Supported 00:13:49.315 Delete Endurance Group: Not Supported 00:13:49.315 Delete NVM Set: Not Supported 00:13:49.315 Extended LBA Formats Supported: Supported 00:13:49.315 Flexible Data Placement Supported: Not Supported 00:13:49.315 00:13:49.315 Controller Memory Buffer Support 00:13:49.315 ================================ 00:13:49.315 Supported: No 00:13:49.315 00:13:49.315 Persistent Memory Region Support 00:13:49.315 ================================ 00:13:49.315 Supported: No 00:13:49.315 00:13:49.315 Admin Command Set Attributes 00:13:49.315 ============================ 00:13:49.315 Security Send/Receive: Not Supported 00:13:49.315 Format NVM: Supported 00:13:49.315 Firmware Activate/Download: Not Supported 00:13:49.315 Namespace Management: Supported 00:13:49.315 Device Self-Test: Not Supported 00:13:49.315 Directives: Supported 00:13:49.315 NVMe-MI: Not Supported 00:13:49.315 Virtualization Management: Not Supported 00:13:49.315 Doorbell Buffer Config: Supported 00:13:49.315 Get LBA Status Capability: Not Supported 00:13:49.315 Command & Feature Lockdown Capability: Not Supported 00:13:49.315 Abort Command Limit: 4 00:13:49.315 Async Event Request Limit: 4 00:13:49.315 Number of Firmware Slots: N/A 00:13:49.315 Firmware Slot 1 Read-Only: N/A 00:13:49.315 Firmware Activation Without Reset: N/A 00:13:49.315 Multiple Update Detection Support: N/A 00:13:49.315 Firmware Update Granularity: No Information Provided 00:13:49.315 Per-Namespace SMART Log: Yes 00:13:49.315 Asymmetric Namespace Access Log Page: Not Supported 00:13:49.315 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:13:49.315 Command Effects Log Page: Supported 00:13:49.315 Get Log Page Extended Data: Supported 00:13:49.315 Telemetry Log Pages: Not Supported 00:13:49.315 Persistent Event Log Pages: Not Supported 00:13:49.315 Supported Log Pages Log Page: May Support 00:13:49.315 Commands Supported & Effects Log Page: Not Supported 00:13:49.315 Feature Identifiers & Effects Log Page:May Support 00:13:49.315 NVMe-MI Commands & Effects Log Page: May Support 00:13:49.315 Data Area 4 for Telemetry Log: Not Supported 00:13:49.315 Error Log Page Entries Supported: 1 00:13:49.315 Keep Alive: Not Supported 00:13:49.315 00:13:49.315 NVM Command Set Attributes 00:13:49.315 ========================== 00:13:49.315 Submission Queue Entry Size 00:13:49.315 Max: 64 00:13:49.315 Min: 64 00:13:49.315 Completion Queue Entry Size 00:13:49.315 Max: 16 00:13:49.315 Min: 16 00:13:49.315 Number of Namespaces: 256 00:13:49.315 Compare Command: Supported 00:13:49.315 Write Uncorrectable Command: Not Supported 00:13:49.315 Dataset Management Command: Supported 00:13:49.316 Write Zeroes Command: Supported 00:13:49.316 Set Features Save Field: Supported 00:13:49.316 Reservations: Not Supported 00:13:49.316 Timestamp: Supported 00:13:49.316 Copy: Supported 00:13:49.316 Volatile Write Cache: Present 00:13:49.316 Atomic Write Unit (Normal): 1 00:13:49.316 Atomic Write Unit (PFail): 1 00:13:49.316 Atomic Compare & Write Unit: 1 00:13:49.316 Fused Compare & Write: Not Supported 00:13:49.316 Scatter-Gather List 00:13:49.316 SGL Command Set: Supported 00:13:49.316 SGL Keyed: Not Supported 00:13:49.316 SGL Bit Bucket Descriptor: Not Supported 00:13:49.316 SGL Metadata Pointer: Not Supported 00:13:49.316 Oversized SGL: Not Supported 00:13:49.316 SGL Metadata Address: Not Supported 00:13:49.316 SGL Offset: Not Supported 00:13:49.316 Transport SGL Data Block: Not Supported 00:13:49.316 Replay Protected Memory Block: Not Supported 00:13:49.316 00:13:49.316 Firmware Slot Information 00:13:49.316 ========================= 00:13:49.316 Active slot: 1 00:13:49.316 Slot 1 Firmware Revision: 1.0 00:13:49.316 00:13:49.316 00:13:49.316 Commands Supported and Effects 00:13:49.316 ============================== 00:13:49.316 Admin Commands 00:13:49.316 -------------- 00:13:49.316 Delete I/O Submission Queue (00h): Supported 00:13:49.316 Create I/O Submission Queue (01h): Supported 00:13:49.316 Get Log Page (02h): Supported 00:13:49.316 Delete I/O Completion Queue (04h): Supported 00:13:49.316 Create I/O Completion Queue (05h): Supported 00:13:49.316 Identify (06h): Supported 00:13:49.316 Abort (08h): Supported 00:13:49.316 Set Features (09h): Supported 00:13:49.316 Get Features (0Ah): Supported 00:13:49.316 Asynchronous Event Request (0Ch): Supported 00:13:49.316 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:49.316 Directive Send (19h): Supported 00:13:49.316 Directive Receive (1Ah): Supported 00:13:49.316 Virtualization Management (1Ch): Supported 00:13:49.316 Doorbell Buffer Config (7Ch): Supported 00:13:49.316 Format NVM (80h): Supported LBA-Change 00:13:49.316 I/O Commands 00:13:49.316 ------------ 00:13:49.316 Flush (00h): Supported LBA-Change 00:13:49.316 Write (01h): Supported LBA-Change 00:13:49.316 Read (02h): Supported 00:13:49.316 Compare (05h): Supported 00:13:49.316 Write Zeroes (08h): Supported LBA-Change 00:13:49.316 Dataset Management (09h): Supported LBA-Change 00:13:49.316 Unknown (0Ch): Supported 00:13:49.316 Unknown (12h): Supported 00:13:49.316 Copy (19h): Supported LBA-Change 00:13:49.316 Unknown (1Dh): 
Supported LBA-Change 00:13:49.316 00:13:49.316 Error Log 00:13:49.316 ========= 00:13:49.316 00:13:49.316 Arbitration 00:13:49.316 =========== 00:13:49.316 Arbitration Burst: no limit 00:13:49.316 00:13:49.316 Power Management 00:13:49.316 ================ 00:13:49.316 Number of Power States: 1 00:13:49.316 Current Power State: Power State #0 00:13:49.316 Power State #0: 00:13:49.316 Max Power: 25.00 W 00:13:49.316 Non-Operational State: Operational 00:13:49.316 Entry Latency: 16 microseconds 00:13:49.316 Exit Latency: 4 microseconds 00:13:49.316 Relative Read Throughput: 0 00:13:49.316 Relative Read Latency: 0 00:13:49.316 Relative Write Throughput: 0 00:13:49.316 Relative Write Latency: 0 00:13:49.316 Idle Power: Not Reported 00:13:49.316 Active Power: Not Reported 00:13:49.316 Non-Operational Permissive Mode: Not Supported 00:13:49.316 00:13:49.316 Health Information 00:13:49.316 ================== 00:13:49.316 Critical Warnings: 00:13:49.316 Available Spare Space: OK 00:13:49.316 Temperature: OK 00:13:49.316 Device Reliability: OK 00:13:49.316 Read Only: No 00:13:49.316 Volatile Memory Backup: OK 00:13:49.316 Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.316 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:49.316 Available Spare: 0% 00:13:49.316 Available Spare Threshold: 0% 00:13:49.316 Life Percentage Used: 0% 00:13:49.316 Data Units Read: 1078 00:13:49.316 Data Units Written: 861 00:13:49.316 Host Read Commands: 48063 00:13:49.316 Host Write Commands: 45069 00:13:49.316 Controller Busy Time: 0 minutes 00:13:49.316 Power Cycles: 0 00:13:49.316 Power On Hours: 0 hours 00:13:49.316 Unsafe Shutdowns: 0 00:13:49.316 Unrecoverable Media Errors: 0 00:13:49.316 Lifetime Error Log Entries: 0 00:13:49.316 Warning Temperature Time: 0 minutes 00:13:49.316 Critical Temperature Time: 0 minutes 00:13:49.316 00:13:49.316 Number of Queues 00:13:49.316 ================ 00:13:49.316 Number of I/O Submission Queues: 64 00:13:49.316 Number of I/O Completion Queues: 64 00:13:49.316 00:13:49.316 ZNS Specific Controller Data 00:13:49.316 ============================ 00:13:49.316 Zone Append Size Limit: 0 00:13:49.316 00:13:49.316 00:13:49.316 Active Namespaces 00:13:49.316 ================= 00:13:49.316 Namespace ID:1 00:13:49.316 Error Recovery Timeout: Unlimited 00:13:49.317 Command Set Identifier: NVM (00h) 00:13:49.317 Deallocate: Supported 00:13:49.317 Deallocated/Unwritten Error: Supported 00:13:49.317 Deallocated Read Value: All 0x00 00:13:49.317 Deallocate in Write Zeroes: Not Supported 00:13:49.317 Deallocated Guard Field: 0xFFFF 00:13:49.317 Flush: Supported 00:13:49.317 Reservation: Not Supported 00:13:49.317 Namespace Sharing Capabilities: Private 00:13:49.317 Size (in LBAs): 1310720 (5GiB) 00:13:49.317 Capacity (in LBAs): 1310720 (5GiB) 00:13:49.317 Utilization (in LBAs): 1310720 (5GiB) 00:13:49.317 Thin Provisioning: Not Supported 00:13:49.317 Per-NS Atomic Units: No 00:13:49.317 Maximum Single Source Range Length: 128 00:13:49.317 Maximum Copy Length: 128 00:13:49.317 Maximum Source Range Count: 128 00:13:49.317 NGUID/EUI64 Never Reused: No 00:13:49.317 Namespace Write Protected: No 00:13:49.317 Number of LBA Formats: 8 00:13:49.317 Current LBA Format: LBA Format #04 00:13:49.317 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.317 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:49.317 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:49.317 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:49.317 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:13:49.317 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:49.317 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:49.317 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:49.317 00:13:49.317 NVM Specific Namespace Data 00:13:49.317 =========================== 00:13:49.317 Logical Block Storage Tag Mask: 0 00:13:49.317 Protection Information Capabilities: 00:13:49.317 16b Guard Protection Information Storage Tag Support: No 00:13:49.317 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:49.317 Storage Tag Check Read Support: No 00:13:49.317 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.317 03:42:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:49.317 03:42:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:13:49.884 ===================================================== 00:13:49.884 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:49.884 ===================================================== 00:13:49.884 Controller Capabilities/Features 00:13:49.884 ================================ 00:13:49.884 Vendor ID: 1b36 00:13:49.884 Subsystem Vendor ID: 1af4 00:13:49.884 Serial Number: 12342 00:13:49.884 Model Number: QEMU NVMe Ctrl 00:13:49.884 Firmware Version: 8.0.0 00:13:49.884 Recommended Arb Burst: 6 00:13:49.884 IEEE OUI Identifier: 00 54 52 00:13:49.884 Multi-path I/O 00:13:49.884 May have multiple subsystem ports: No 00:13:49.884 May have multiple controllers: No 00:13:49.884 Associated with SR-IOV VF: No 00:13:49.884 Max Data Transfer Size: 524288 00:13:49.884 Max Number of Namespaces: 256 00:13:49.884 Max Number of I/O Queues: 64 00:13:49.884 NVMe Specification Version (VS): 1.4 00:13:49.884 NVMe Specification Version (Identify): 1.4 00:13:49.884 Maximum Queue Entries: 2048 00:13:49.884 Contiguous Queues Required: Yes 00:13:49.884 Arbitration Mechanisms Supported 00:13:49.884 Weighted Round Robin: Not Supported 00:13:49.884 Vendor Specific: Not Supported 00:13:49.884 Reset Timeout: 7500 ms 00:13:49.884 Doorbell Stride: 4 bytes 00:13:49.884 NVM Subsystem Reset: Not Supported 00:13:49.884 Command Sets Supported 00:13:49.884 NVM Command Set: Supported 00:13:49.885 Boot Partition: Not Supported 00:13:49.885 Memory Page Size Minimum: 4096 bytes 00:13:49.885 Memory Page Size Maximum: 65536 bytes 00:13:49.885 Persistent Memory Region: Not Supported 00:13:49.885 Optional Asynchronous Events Supported 00:13:49.885 Namespace Attribute Notices: Supported 00:13:49.885 Firmware Activation Notices: Not Supported 00:13:49.885 ANA Change Notices: Not Supported 00:13:49.885 PLE Aggregate Log Change Notices: Not Supported 00:13:49.885 LBA Status Info Alert Notices: 
Not Supported 00:13:49.885 EGE Aggregate Log Change Notices: Not Supported 00:13:49.885 Normal NVM Subsystem Shutdown event: Not Supported 00:13:49.885 Zone Descriptor Change Notices: Not Supported 00:13:49.885 Discovery Log Change Notices: Not Supported 00:13:49.885 Controller Attributes 00:13:49.885 128-bit Host Identifier: Not Supported 00:13:49.885 Non-Operational Permissive Mode: Not Supported 00:13:49.885 NVM Sets: Not Supported 00:13:49.885 Read Recovery Levels: Not Supported 00:13:49.885 Endurance Groups: Not Supported 00:13:49.885 Predictable Latency Mode: Not Supported 00:13:49.885 Traffic Based Keep ALive: Not Supported 00:13:49.885 Namespace Granularity: Not Supported 00:13:49.885 SQ Associations: Not Supported 00:13:49.885 UUID List: Not Supported 00:13:49.885 Multi-Domain Subsystem: Not Supported 00:13:49.885 Fixed Capacity Management: Not Supported 00:13:49.885 Variable Capacity Management: Not Supported 00:13:49.885 Delete Endurance Group: Not Supported 00:13:49.885 Delete NVM Set: Not Supported 00:13:49.885 Extended LBA Formats Supported: Supported 00:13:49.885 Flexible Data Placement Supported: Not Supported 00:13:49.885 00:13:49.885 Controller Memory Buffer Support 00:13:49.885 ================================ 00:13:49.885 Supported: No 00:13:49.885 00:13:49.885 Persistent Memory Region Support 00:13:49.885 ================================ 00:13:49.885 Supported: No 00:13:49.885 00:13:49.885 Admin Command Set Attributes 00:13:49.885 ============================ 00:13:49.885 Security Send/Receive: Not Supported 00:13:49.885 Format NVM: Supported 00:13:49.885 Firmware Activate/Download: Not Supported 00:13:49.885 Namespace Management: Supported 00:13:49.885 Device Self-Test: Not Supported 00:13:49.885 Directives: Supported 00:13:49.885 NVMe-MI: Not Supported 00:13:49.885 Virtualization Management: Not Supported 00:13:49.885 Doorbell Buffer Config: Supported 00:13:49.885 Get LBA Status Capability: Not Supported 00:13:49.885 Command & Feature Lockdown Capability: Not Supported 00:13:49.885 Abort Command Limit: 4 00:13:49.885 Async Event Request Limit: 4 00:13:49.885 Number of Firmware Slots: N/A 00:13:49.885 Firmware Slot 1 Read-Only: N/A 00:13:49.885 Firmware Activation Without Reset: N/A 00:13:49.885 Multiple Update Detection Support: N/A 00:13:49.885 Firmware Update Granularity: No Information Provided 00:13:49.885 Per-Namespace SMART Log: Yes 00:13:49.885 Asymmetric Namespace Access Log Page: Not Supported 00:13:49.885 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:49.885 Command Effects Log Page: Supported 00:13:49.885 Get Log Page Extended Data: Supported 00:13:49.885 Telemetry Log Pages: Not Supported 00:13:49.885 Persistent Event Log Pages: Not Supported 00:13:49.885 Supported Log Pages Log Page: May Support 00:13:49.885 Commands Supported & Effects Log Page: Not Supported 00:13:49.885 Feature Identifiers & Effects Log Page:May Support 00:13:49.885 NVMe-MI Commands & Effects Log Page: May Support 00:13:49.885 Data Area 4 for Telemetry Log: Not Supported 00:13:49.885 Error Log Page Entries Supported: 1 00:13:49.885 Keep Alive: Not Supported 00:13:49.885 00:13:49.885 NVM Command Set Attributes 00:13:49.885 ========================== 00:13:49.885 Submission Queue Entry Size 00:13:49.885 Max: 64 00:13:49.885 Min: 64 00:13:49.885 Completion Queue Entry Size 00:13:49.885 Max: 16 00:13:49.885 Min: 16 00:13:49.885 Number of Namespaces: 256 00:13:49.885 Compare Command: Supported 00:13:49.885 Write Uncorrectable Command: Not Supported 00:13:49.885 Dataset Management Command: 
Supported 00:13:49.885 Write Zeroes Command: Supported 00:13:49.885 Set Features Save Field: Supported 00:13:49.885 Reservations: Not Supported 00:13:49.885 Timestamp: Supported 00:13:49.885 Copy: Supported 00:13:49.885 Volatile Write Cache: Present 00:13:49.885 Atomic Write Unit (Normal): 1 00:13:49.885 Atomic Write Unit (PFail): 1 00:13:49.885 Atomic Compare & Write Unit: 1 00:13:49.885 Fused Compare & Write: Not Supported 00:13:49.885 Scatter-Gather List 00:13:49.885 SGL Command Set: Supported 00:13:49.885 SGL Keyed: Not Supported 00:13:49.885 SGL Bit Bucket Descriptor: Not Supported 00:13:49.885 SGL Metadata Pointer: Not Supported 00:13:49.885 Oversized SGL: Not Supported 00:13:49.885 SGL Metadata Address: Not Supported 00:13:49.885 SGL Offset: Not Supported 00:13:49.885 Transport SGL Data Block: Not Supported 00:13:49.885 Replay Protected Memory Block: Not Supported 00:13:49.885 00:13:49.885 Firmware Slot Information 00:13:49.885 ========================= 00:13:49.885 Active slot: 1 00:13:49.885 Slot 1 Firmware Revision: 1.0 00:13:49.885 00:13:49.885 00:13:49.885 Commands Supported and Effects 00:13:49.885 ============================== 00:13:49.885 Admin Commands 00:13:49.885 -------------- 00:13:49.885 Delete I/O Submission Queue (00h): Supported 00:13:49.885 Create I/O Submission Queue (01h): Supported 00:13:49.885 Get Log Page (02h): Supported 00:13:49.885 Delete I/O Completion Queue (04h): Supported 00:13:49.885 Create I/O Completion Queue (05h): Supported 00:13:49.885 Identify (06h): Supported 00:13:49.885 Abort (08h): Supported 00:13:49.885 Set Features (09h): Supported 00:13:49.885 Get Features (0Ah): Supported 00:13:49.885 Asynchronous Event Request (0Ch): Supported 00:13:49.885 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:49.885 Directive Send (19h): Supported 00:13:49.885 Directive Receive (1Ah): Supported 00:13:49.885 Virtualization Management (1Ch): Supported 00:13:49.885 Doorbell Buffer Config (7Ch): Supported 00:13:49.885 Format NVM (80h): Supported LBA-Change 00:13:49.885 I/O Commands 00:13:49.885 ------------ 00:13:49.885 Flush (00h): Supported LBA-Change 00:13:49.885 Write (01h): Supported LBA-Change 00:13:49.885 Read (02h): Supported 00:13:49.885 Compare (05h): Supported 00:13:49.885 Write Zeroes (08h): Supported LBA-Change 00:13:49.885 Dataset Management (09h): Supported LBA-Change 00:13:49.885 Unknown (0Ch): Supported 00:13:49.885 Unknown (12h): Supported 00:13:49.886 Copy (19h): Supported LBA-Change 00:13:49.886 Unknown (1Dh): Supported LBA-Change 00:13:49.886 00:13:49.886 Error Log 00:13:49.886 ========= 00:13:49.886 00:13:49.886 Arbitration 00:13:49.886 =========== 00:13:49.886 Arbitration Burst: no limit 00:13:49.886 00:13:49.886 Power Management 00:13:49.886 ================ 00:13:49.886 Number of Power States: 1 00:13:49.886 Current Power State: Power State #0 00:13:49.886 Power State #0: 00:13:49.886 Max Power: 25.00 W 00:13:49.886 Non-Operational State: Operational 00:13:49.886 Entry Latency: 16 microseconds 00:13:49.886 Exit Latency: 4 microseconds 00:13:49.886 Relative Read Throughput: 0 00:13:49.886 Relative Read Latency: 0 00:13:49.886 Relative Write Throughput: 0 00:13:49.886 Relative Write Latency: 0 00:13:49.886 Idle Power: Not Reported 00:13:49.886 Active Power: Not Reported 00:13:49.886 Non-Operational Permissive Mode: Not Supported 00:13:49.886 00:13:49.886 Health Information 00:13:49.886 ================== 00:13:49.886 Critical Warnings: 00:13:49.886 Available Spare Space: OK 00:13:49.886 Temperature: OK 00:13:49.886 Device 
Reliability: OK 00:13:49.886 Read Only: No 00:13:49.886 Volatile Memory Backup: OK 00:13:49.886 Current Temperature: 323 Kelvin (50 Celsius) 00:13:49.886 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:49.886 Available Spare: 0% 00:13:49.886 Available Spare Threshold: 0% 00:13:49.886 Life Percentage Used: 0% 00:13:49.886 Data Units Read: 2147 00:13:49.886 Data Units Written: 1827 00:13:49.886 Host Read Commands: 98236 00:13:49.886 Host Write Commands: 94006 00:13:49.886 Controller Busy Time: 0 minutes 00:13:49.886 Power Cycles: 0 00:13:49.886 Power On Hours: 0 hours 00:13:49.886 Unsafe Shutdowns: 0 00:13:49.886 Unrecoverable Media Errors: 0 00:13:49.886 Lifetime Error Log Entries: 0 00:13:49.886 Warning Temperature Time: 0 minutes 00:13:49.886 Critical Temperature Time: 0 minutes 00:13:49.886 00:13:49.886 Number of Queues 00:13:49.886 ================ 00:13:49.886 Number of I/O Submission Queues: 64 00:13:49.886 Number of I/O Completion Queues: 64 00:13:49.886 00:13:49.886 ZNS Specific Controller Data 00:13:49.886 ============================ 00:13:49.886 Zone Append Size Limit: 0 00:13:49.886 00:13:49.886 00:13:49.886 Active Namespaces 00:13:49.886 ================= 00:13:49.886 Namespace ID:1 00:13:49.886 Error Recovery Timeout: Unlimited 00:13:49.886 Command Set Identifier: NVM (00h) 00:13:49.886 Deallocate: Supported 00:13:49.886 Deallocated/Unwritten Error: Supported 00:13:49.886 Deallocated Read Value: All 0x00 00:13:49.886 Deallocate in Write Zeroes: Not Supported 00:13:49.886 Deallocated Guard Field: 0xFFFF 00:13:49.886 Flush: Supported 00:13:49.886 Reservation: Not Supported 00:13:49.886 Namespace Sharing Capabilities: Private 00:13:49.886 Size (in LBAs): 1048576 (4GiB) 00:13:49.886 Capacity (in LBAs): 1048576 (4GiB) 00:13:49.886 Utilization (in LBAs): 1048576 (4GiB) 00:13:49.886 Thin Provisioning: Not Supported 00:13:49.886 Per-NS Atomic Units: No 00:13:49.886 Maximum Single Source Range Length: 128 00:13:49.886 Maximum Copy Length: 128 00:13:49.886 Maximum Source Range Count: 128 00:13:49.886 NGUID/EUI64 Never Reused: No 00:13:49.886 Namespace Write Protected: No 00:13:49.886 Number of LBA Formats: 8 00:13:49.886 Current LBA Format: LBA Format #04 00:13:49.886 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.886 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:49.886 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:49.886 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:49.886 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:49.886 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:49.886 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:49.886 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:49.886 00:13:49.886 NVM Specific Namespace Data 00:13:49.886 =========================== 00:13:49.886 Logical Block Storage Tag Mask: 0 00:13:49.886 Protection Information Capabilities: 00:13:49.886 16b Guard Protection Information Storage Tag Support: No 00:13:49.886 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:49.886 Storage Tag Check Read Support: No 00:13:49.886 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Namespace ID:2 00:13:49.886 Error Recovery Timeout: Unlimited 00:13:49.886 Command Set Identifier: NVM (00h) 00:13:49.886 Deallocate: Supported 00:13:49.886 Deallocated/Unwritten Error: Supported 00:13:49.886 Deallocated Read Value: All 0x00 00:13:49.886 Deallocate in Write Zeroes: Not Supported 00:13:49.886 Deallocated Guard Field: 0xFFFF 00:13:49.886 Flush: Supported 00:13:49.886 Reservation: Not Supported 00:13:49.886 Namespace Sharing Capabilities: Private 00:13:49.886 Size (in LBAs): 1048576 (4GiB) 00:13:49.886 Capacity (in LBAs): 1048576 (4GiB) 00:13:49.886 Utilization (in LBAs): 1048576 (4GiB) 00:13:49.886 Thin Provisioning: Not Supported 00:13:49.886 Per-NS Atomic Units: No 00:13:49.886 Maximum Single Source Range Length: 128 00:13:49.886 Maximum Copy Length: 128 00:13:49.886 Maximum Source Range Count: 128 00:13:49.886 NGUID/EUI64 Never Reused: No 00:13:49.886 Namespace Write Protected: No 00:13:49.886 Number of LBA Formats: 8 00:13:49.886 Current LBA Format: LBA Format #04 00:13:49.886 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.886 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:49.886 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:49.886 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:49.886 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:49.886 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:49.886 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:49.886 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:49.886 00:13:49.886 NVM Specific Namespace Data 00:13:49.886 =========================== 00:13:49.886 Logical Block Storage Tag Mask: 0 00:13:49.886 Protection Information Capabilities: 00:13:49.886 16b Guard Protection Information Storage Tag Support: No 00:13:49.886 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:49.886 Storage Tag Check Read Support: No 00:13:49.886 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.886 Namespace ID:3 00:13:49.886 Error Recovery Timeout: Unlimited 00:13:49.886 Command Set Identifier: NVM (00h) 00:13:49.886 Deallocate: Supported 00:13:49.886 Deallocated/Unwritten Error: Supported 00:13:49.886 Deallocated Read Value: All 0x00 00:13:49.886 Deallocate in Write Zeroes: Not Supported 00:13:49.886 Deallocated Guard Field: 0xFFFF 00:13:49.886 Flush: Supported 00:13:49.886 Reservation: Not Supported 00:13:49.886 
Namespace Sharing Capabilities: Private 00:13:49.886 Size (in LBAs): 1048576 (4GiB) 00:13:49.886 Capacity (in LBAs): 1048576 (4GiB) 00:13:49.886 Utilization (in LBAs): 1048576 (4GiB) 00:13:49.887 Thin Provisioning: Not Supported 00:13:49.887 Per-NS Atomic Units: No 00:13:49.887 Maximum Single Source Range Length: 128 00:13:49.887 Maximum Copy Length: 128 00:13:49.887 Maximum Source Range Count: 128 00:13:49.887 NGUID/EUI64 Never Reused: No 00:13:49.887 Namespace Write Protected: No 00:13:49.887 Number of LBA Formats: 8 00:13:49.887 Current LBA Format: LBA Format #04 00:13:49.887 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.887 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:49.887 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:49.887 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:49.887 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:49.887 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:49.887 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:49.887 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:49.887 00:13:49.887 NVM Specific Namespace Data 00:13:49.887 =========================== 00:13:49.887 Logical Block Storage Tag Mask: 0 00:13:49.887 Protection Information Capabilities: 00:13:49.887 16b Guard Protection Information Storage Tag Support: No 00:13:49.887 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:49.887 Storage Tag Check Read Support: No 00:13:49.887 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:49.887 03:42:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:49.887 03:42:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:13:50.146 ===================================================== 00:13:50.146 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:50.146 ===================================================== 00:13:50.146 Controller Capabilities/Features 00:13:50.146 ================================ 00:13:50.146 Vendor ID: 1b36 00:13:50.146 Subsystem Vendor ID: 1af4 00:13:50.146 Serial Number: 12343 00:13:50.146 Model Number: QEMU NVMe Ctrl 00:13:50.146 Firmware Version: 8.0.0 00:13:50.146 Recommended Arb Burst: 6 00:13:50.146 IEEE OUI Identifier: 00 54 52 00:13:50.146 Multi-path I/O 00:13:50.146 May have multiple subsystem ports: No 00:13:50.146 May have multiple controllers: Yes 00:13:50.146 Associated with SR-IOV VF: No 00:13:50.146 Max Data Transfer Size: 524288 00:13:50.146 Max Number of Namespaces: 256 00:13:50.146 Max Number of I/O Queues: 64 00:13:50.146 NVMe Specification Version (VS): 1.4 00:13:50.146 NVMe Specification Version (Identify): 1.4 00:13:50.146 Maximum Queue Entries: 2048 
00:13:50.146 Contiguous Queues Required: Yes 00:13:50.146 Arbitration Mechanisms Supported 00:13:50.146 Weighted Round Robin: Not Supported 00:13:50.146 Vendor Specific: Not Supported 00:13:50.146 Reset Timeout: 7500 ms 00:13:50.146 Doorbell Stride: 4 bytes 00:13:50.146 NVM Subsystem Reset: Not Supported 00:13:50.146 Command Sets Supported 00:13:50.146 NVM Command Set: Supported 00:13:50.146 Boot Partition: Not Supported 00:13:50.146 Memory Page Size Minimum: 4096 bytes 00:13:50.146 Memory Page Size Maximum: 65536 bytes 00:13:50.146 Persistent Memory Region: Not Supported 00:13:50.146 Optional Asynchronous Events Supported 00:13:50.146 Namespace Attribute Notices: Supported 00:13:50.146 Firmware Activation Notices: Not Supported 00:13:50.146 ANA Change Notices: Not Supported 00:13:50.146 PLE Aggregate Log Change Notices: Not Supported 00:13:50.146 LBA Status Info Alert Notices: Not Supported 00:13:50.146 EGE Aggregate Log Change Notices: Not Supported 00:13:50.146 Normal NVM Subsystem Shutdown event: Not Supported 00:13:50.146 Zone Descriptor Change Notices: Not Supported 00:13:50.146 Discovery Log Change Notices: Not Supported 00:13:50.146 Controller Attributes 00:13:50.146 128-bit Host Identifier: Not Supported 00:13:50.146 Non-Operational Permissive Mode: Not Supported 00:13:50.146 NVM Sets: Not Supported 00:13:50.146 Read Recovery Levels: Not Supported 00:13:50.146 Endurance Groups: Supported 00:13:50.146 Predictable Latency Mode: Not Supported 00:13:50.146 Traffic Based Keep ALive: Not Supported 00:13:50.146 Namespace Granularity: Not Supported 00:13:50.146 SQ Associations: Not Supported 00:13:50.146 UUID List: Not Supported 00:13:50.146 Multi-Domain Subsystem: Not Supported 00:13:50.146 Fixed Capacity Management: Not Supported 00:13:50.146 Variable Capacity Management: Not Supported 00:13:50.146 Delete Endurance Group: Not Supported 00:13:50.146 Delete NVM Set: Not Supported 00:13:50.146 Extended LBA Formats Supported: Supported 00:13:50.146 Flexible Data Placement Supported: Supported 00:13:50.146 00:13:50.146 Controller Memory Buffer Support 00:13:50.146 ================================ 00:13:50.146 Supported: No 00:13:50.146 00:13:50.146 Persistent Memory Region Support 00:13:50.146 ================================ 00:13:50.146 Supported: No 00:13:50.146 00:13:50.146 Admin Command Set Attributes 00:13:50.146 ============================ 00:13:50.146 Security Send/Receive: Not Supported 00:13:50.146 Format NVM: Supported 00:13:50.146 Firmware Activate/Download: Not Supported 00:13:50.146 Namespace Management: Supported 00:13:50.146 Device Self-Test: Not Supported 00:13:50.146 Directives: Supported 00:13:50.146 NVMe-MI: Not Supported 00:13:50.146 Virtualization Management: Not Supported 00:13:50.146 Doorbell Buffer Config: Supported 00:13:50.146 Get LBA Status Capability: Not Supported 00:13:50.146 Command & Feature Lockdown Capability: Not Supported 00:13:50.146 Abort Command Limit: 4 00:13:50.146 Async Event Request Limit: 4 00:13:50.146 Number of Firmware Slots: N/A 00:13:50.146 Firmware Slot 1 Read-Only: N/A 00:13:50.146 Firmware Activation Without Reset: N/A 00:13:50.146 Multiple Update Detection Support: N/A 00:13:50.146 Firmware Update Granularity: No Information Provided 00:13:50.146 Per-Namespace SMART Log: Yes 00:13:50.146 Asymmetric Namespace Access Log Page: Not Supported 00:13:50.146 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:50.146 Command Effects Log Page: Supported 00:13:50.146 Get Log Page Extended Data: Supported 00:13:50.146 Telemetry Log Pages: Not 
Supported 00:13:50.146 Persistent Event Log Pages: Not Supported 00:13:50.147 Supported Log Pages Log Page: May Support 00:13:50.147 Commands Supported & Effects Log Page: Not Supported 00:13:50.147 Feature Identifiers & Effects Log Page:May Support 00:13:50.147 NVMe-MI Commands & Effects Log Page: May Support 00:13:50.147 Data Area 4 for Telemetry Log: Not Supported 00:13:50.147 Error Log Page Entries Supported: 1 00:13:50.147 Keep Alive: Not Supported 00:13:50.147 00:13:50.147 NVM Command Set Attributes 00:13:50.147 ========================== 00:13:50.147 Submission Queue Entry Size 00:13:50.147 Max: 64 00:13:50.147 Min: 64 00:13:50.147 Completion Queue Entry Size 00:13:50.147 Max: 16 00:13:50.147 Min: 16 00:13:50.147 Number of Namespaces: 256 00:13:50.147 Compare Command: Supported 00:13:50.147 Write Uncorrectable Command: Not Supported 00:13:50.147 Dataset Management Command: Supported 00:13:50.147 Write Zeroes Command: Supported 00:13:50.147 Set Features Save Field: Supported 00:13:50.147 Reservations: Not Supported 00:13:50.147 Timestamp: Supported 00:13:50.147 Copy: Supported 00:13:50.147 Volatile Write Cache: Present 00:13:50.147 Atomic Write Unit (Normal): 1 00:13:50.147 Atomic Write Unit (PFail): 1 00:13:50.147 Atomic Compare & Write Unit: 1 00:13:50.147 Fused Compare & Write: Not Supported 00:13:50.147 Scatter-Gather List 00:13:50.147 SGL Command Set: Supported 00:13:50.147 SGL Keyed: Not Supported 00:13:50.147 SGL Bit Bucket Descriptor: Not Supported 00:13:50.147 SGL Metadata Pointer: Not Supported 00:13:50.147 Oversized SGL: Not Supported 00:13:50.147 SGL Metadata Address: Not Supported 00:13:50.147 SGL Offset: Not Supported 00:13:50.147 Transport SGL Data Block: Not Supported 00:13:50.147 Replay Protected Memory Block: Not Supported 00:13:50.147 00:13:50.147 Firmware Slot Information 00:13:50.147 ========================= 00:13:50.147 Active slot: 1 00:13:50.147 Slot 1 Firmware Revision: 1.0 00:13:50.147 00:13:50.147 00:13:50.147 Commands Supported and Effects 00:13:50.147 ============================== 00:13:50.147 Admin Commands 00:13:50.147 -------------- 00:13:50.147 Delete I/O Submission Queue (00h): Supported 00:13:50.147 Create I/O Submission Queue (01h): Supported 00:13:50.147 Get Log Page (02h): Supported 00:13:50.147 Delete I/O Completion Queue (04h): Supported 00:13:50.147 Create I/O Completion Queue (05h): Supported 00:13:50.147 Identify (06h): Supported 00:13:50.147 Abort (08h): Supported 00:13:50.147 Set Features (09h): Supported 00:13:50.147 Get Features (0Ah): Supported 00:13:50.147 Asynchronous Event Request (0Ch): Supported 00:13:50.147 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:50.147 Directive Send (19h): Supported 00:13:50.147 Directive Receive (1Ah): Supported 00:13:50.147 Virtualization Management (1Ch): Supported 00:13:50.147 Doorbell Buffer Config (7Ch): Supported 00:13:50.147 Format NVM (80h): Supported LBA-Change 00:13:50.147 I/O Commands 00:13:50.147 ------------ 00:13:50.147 Flush (00h): Supported LBA-Change 00:13:50.147 Write (01h): Supported LBA-Change 00:13:50.147 Read (02h): Supported 00:13:50.147 Compare (05h): Supported 00:13:50.147 Write Zeroes (08h): Supported LBA-Change 00:13:50.147 Dataset Management (09h): Supported LBA-Change 00:13:50.147 Unknown (0Ch): Supported 00:13:50.147 Unknown (12h): Supported 00:13:50.147 Copy (19h): Supported LBA-Change 00:13:50.147 Unknown (1Dh): Supported LBA-Change 00:13:50.147 00:13:50.147 Error Log 00:13:50.147 ========= 00:13:50.147 00:13:50.147 Arbitration 00:13:50.147 =========== 
00:13:50.147 Arbitration Burst: no limit 00:13:50.147 00:13:50.147 Power Management 00:13:50.147 ================ 00:13:50.147 Number of Power States: 1 00:13:50.147 Current Power State: Power State #0 00:13:50.147 Power State #0: 00:13:50.147 Max Power: 25.00 W 00:13:50.147 Non-Operational State: Operational 00:13:50.147 Entry Latency: 16 microseconds 00:13:50.147 Exit Latency: 4 microseconds 00:13:50.147 Relative Read Throughput: 0 00:13:50.147 Relative Read Latency: 0 00:13:50.147 Relative Write Throughput: 0 00:13:50.147 Relative Write Latency: 0 00:13:50.147 Idle Power: Not Reported 00:13:50.147 Active Power: Not Reported 00:13:50.147 Non-Operational Permissive Mode: Not Supported 00:13:50.147 00:13:50.147 Health Information 00:13:50.147 ================== 00:13:50.147 Critical Warnings: 00:13:50.147 Available Spare Space: OK 00:13:50.147 Temperature: OK 00:13:50.147 Device Reliability: OK 00:13:50.147 Read Only: No 00:13:50.147 Volatile Memory Backup: OK 00:13:50.147 Current Temperature: 323 Kelvin (50 Celsius) 00:13:50.147 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:50.147 Available Spare: 0% 00:13:50.147 Available Spare Threshold: 0% 00:13:50.147 Life Percentage Used: 0% 00:13:50.147 Data Units Read: 781 00:13:50.147 Data Units Written: 675 00:13:50.147 Host Read Commands: 33375 00:13:50.147 Host Write Commands: 31965 00:13:50.147 Controller Busy Time: 0 minutes 00:13:50.147 Power Cycles: 0 00:13:50.147 Power On Hours: 0 hours 00:13:50.147 Unsafe Shutdowns: 0 00:13:50.147 Unrecoverable Media Errors: 0 00:13:50.147 Lifetime Error Log Entries: 0 00:13:50.147 Warning Temperature Time: 0 minutes 00:13:50.147 Critical Temperature Time: 0 minutes 00:13:50.147 00:13:50.147 Number of Queues 00:13:50.147 ================ 00:13:50.147 Number of I/O Submission Queues: 64 00:13:50.147 Number of I/O Completion Queues: 64 00:13:50.147 00:13:50.147 ZNS Specific Controller Data 00:13:50.147 ============================ 00:13:50.147 Zone Append Size Limit: 0 00:13:50.147 00:13:50.147 00:13:50.147 Active Namespaces 00:13:50.147 ================= 00:13:50.147 Namespace ID:1 00:13:50.147 Error Recovery Timeout: Unlimited 00:13:50.147 Command Set Identifier: NVM (00h) 00:13:50.147 Deallocate: Supported 00:13:50.147 Deallocated/Unwritten Error: Supported 00:13:50.147 Deallocated Read Value: All 0x00 00:13:50.147 Deallocate in Write Zeroes: Not Supported 00:13:50.147 Deallocated Guard Field: 0xFFFF 00:13:50.147 Flush: Supported 00:13:50.147 Reservation: Not Supported 00:13:50.147 Namespace Sharing Capabilities: Multiple Controllers 00:13:50.147 Size (in LBAs): 262144 (1GiB) 00:13:50.147 Capacity (in LBAs): 262144 (1GiB) 00:13:50.147 Utilization (in LBAs): 262144 (1GiB) 00:13:50.147 Thin Provisioning: Not Supported 00:13:50.147 Per-NS Atomic Units: No 00:13:50.147 Maximum Single Source Range Length: 128 00:13:50.147 Maximum Copy Length: 128 00:13:50.147 Maximum Source Range Count: 128 00:13:50.147 NGUID/EUI64 Never Reused: No 00:13:50.147 Namespace Write Protected: No 00:13:50.147 Endurance group ID: 1 00:13:50.147 Number of LBA Formats: 8 00:13:50.147 Current LBA Format: LBA Format #04 00:13:50.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:50.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:50.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:50.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:50.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:50.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:50.147 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:13:50.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:50.147 00:13:50.147 Get Feature FDP: 00:13:50.147 ================ 00:13:50.147 Enabled: Yes 00:13:50.147 FDP configuration index: 0 00:13:50.147 00:13:50.147 FDP configurations log page 00:13:50.147 =========================== 00:13:50.147 Number of FDP configurations: 1 00:13:50.147 Version: 0 00:13:50.147 Size: 112 00:13:50.147 FDP Configuration Descriptor: 0 00:13:50.147 Descriptor Size: 96 00:13:50.147 Reclaim Group Identifier format: 2 00:13:50.147 FDP Volatile Write Cache: Not Present 00:13:50.147 FDP Configuration: Valid 00:13:50.147 Vendor Specific Size: 0 00:13:50.147 Number of Reclaim Groups: 2 00:13:50.147 Number of Reclaim Unit Handles: 8 00:13:50.147 Max Placement Identifiers: 128 00:13:50.147 Number of Namespaces Supported: 256 00:13:50.147 Reclaim unit Nominal Size: 6000000 bytes 00:13:50.147 Estimated Reclaim Unit Time Limit: Not Reported 00:13:50.147 RUH Desc #000: RUH Type: Initially Isolated 00:13:50.147 RUH Desc #001: RUH Type: Initially Isolated 00:13:50.147 RUH Desc #002: RUH Type: Initially Isolated 00:13:50.147 RUH Desc #003: RUH Type: Initially Isolated 00:13:50.147 RUH Desc #004: RUH Type: Initially Isolated 00:13:50.148 RUH Desc #005: RUH Type: Initially Isolated 00:13:50.148 RUH Desc #006: RUH Type: Initially Isolated 00:13:50.148 RUH Desc #007: RUH Type: Initially Isolated 00:13:50.148 00:13:50.148 FDP reclaim unit handle usage log page 00:13:50.148 ====================================== 00:13:50.148 Number of Reclaim Unit Handles: 8 00:13:50.148 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:50.148 RUH Usage Desc #001: RUH Attributes: Unused 00:13:50.148 RUH Usage Desc #002: RUH Attributes: Unused 00:13:50.148 RUH Usage Desc #003: RUH Attributes: Unused 00:13:50.148 RUH Usage Desc #004: RUH Attributes: Unused 00:13:50.148 RUH Usage Desc #005: RUH Attributes: Unused 00:13:50.148 RUH Usage Desc #006: RUH Attributes: Unused 00:13:50.148 RUH Usage Desc #007: RUH Attributes: Unused 00:13:50.148 00:13:50.148 FDP statistics log page 00:13:50.148 ======================= 00:13:50.148 Host bytes with metadata written: 410427392 00:13:50.148 Media bytes with metadata written: 410472448 00:13:50.148 Media bytes erased: 0 00:13:50.148 00:13:50.148 FDP events log page 00:13:50.148 =================== 00:13:50.148 Number of FDP events: 0 00:13:50.148 00:13:50.148 NVM Specific Namespace Data 00:13:50.148 =========================== 00:13:50.148 Logical Block Storage Tag Mask: 0 00:13:50.148 Protection Information Capabilities: 00:13:50.148 16b Guard Protection Information Storage Tag Support: No 00:13:50.148 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:50.148 Storage Tag Check Read Support: No 00:13:50.148 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:50.148 00:13:50.148 real 0m1.838s 00:13:50.148 user 0m0.774s 00:13:50.148 sys 0m0.825s 00:13:50.148 03:42:04 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.148 03:42:04 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:13:50.148 ************************************ 00:13:50.148 END TEST nvme_identify 00:13:50.148 ************************************ 00:13:50.148 03:42:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:50.148 03:42:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:13:50.148 03:42:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:50.148 03:42:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.148 03:42:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.148 ************************************ 00:13:50.148 START TEST nvme_perf 00:13:50.148 ************************************ 00:13:50.148 03:42:04 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:13:50.148 03:42:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:13:51.526 Initializing NVMe Controllers 00:13:51.526 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:51.526 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:51.526 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:51.526 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:51.526 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:51.526 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:51.526 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:51.526 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:51.526 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:51.526 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:51.526 Initialization complete. Launching workers. 
00:13:51.526 ======================================================== 00:13:51.526 Latency(us) 00:13:51.526 Device Information : IOPS MiB/s Average min max 00:13:51.526 PCIE (0000:00:10.0) NSID 1 from core 0: 9176.24 107.53 13991.56 8126.44 45121.54 00:13:51.526 PCIE (0000:00:11.0) NSID 1 from core 0: 9176.24 107.53 13957.07 8265.86 41635.92 00:13:51.526 PCIE (0000:00:13.0) NSID 1 from core 0: 9176.24 107.53 13917.68 8199.08 38974.20 00:13:51.526 PCIE (0000:00:12.0) NSID 1 from core 0: 9176.24 107.53 13876.45 8217.07 35640.01 00:13:51.526 PCIE (0000:00:12.0) NSID 2 from core 0: 9176.24 107.53 13836.81 8217.23 32377.94 00:13:51.526 PCIE (0000:00:12.0) NSID 3 from core 0: 9176.24 107.53 13796.49 8269.05 28752.46 00:13:51.526 ======================================================== 00:13:51.526 Total : 55057.44 645.20 13896.01 8126.44 45121.54 00:13:51.526 00:13:51.526 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:51.526 ================================================================================= 00:13:51.527 1.00000% : 8400.524us 00:13:51.527 10.00000% : 8996.305us 00:13:51.527 25.00000% : 9770.822us 00:13:51.527 50.00000% : 12273.105us 00:13:51.527 75.00000% : 17873.455us 00:13:51.527 90.00000% : 19303.331us 00:13:51.527 95.00000% : 19899.113us 00:13:51.527 98.00000% : 23116.335us 00:13:51.527 99.00000% : 36223.535us 00:13:51.527 99.50000% : 43134.604us 00:13:51.527 99.90000% : 44802.793us 00:13:51.527 99.99000% : 45279.418us 00:13:51.527 99.99900% : 45279.418us 00:13:51.527 99.99990% : 45279.418us 00:13:51.527 99.99999% : 45279.418us 00:13:51.527 00:13:51.527 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:51.527 ================================================================================= 00:13:51.527 1.00000% : 8460.102us 00:13:51.527 10.00000% : 8996.305us 00:13:51.527 25.00000% : 9711.244us 00:13:51.527 50.00000% : 12213.527us 00:13:51.527 75.00000% : 17873.455us 00:13:51.527 90.00000% : 19184.175us 00:13:51.527 95.00000% : 19660.800us 00:13:51.527 98.00000% : 23116.335us 00:13:51.527 99.00000% : 33840.407us 00:13:51.527 99.50000% : 39798.225us 00:13:51.527 99.90000% : 41466.415us 00:13:51.527 99.99000% : 41704.727us 00:13:51.527 99.99900% : 41704.727us 00:13:51.527 99.99990% : 41704.727us 00:13:51.527 99.99999% : 41704.727us 00:13:51.527 00:13:51.527 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:51.527 ================================================================================= 00:13:51.527 1.00000% : 8519.680us 00:13:51.527 10.00000% : 8996.305us 00:13:51.527 25.00000% : 9711.244us 00:13:51.527 50.00000% : 12153.949us 00:13:51.527 75.00000% : 17873.455us 00:13:51.527 90.00000% : 19184.175us 00:13:51.527 95.00000% : 19660.800us 00:13:51.527 98.00000% : 23116.335us 00:13:51.527 99.00000% : 30384.873us 00:13:51.527 99.50000% : 37176.785us 00:13:51.527 99.90000% : 38606.662us 00:13:51.527 99.99000% : 39083.287us 00:13:51.527 99.99900% : 39083.287us 00:13:51.527 99.99990% : 39083.287us 00:13:51.527 99.99999% : 39083.287us 00:13:51.527 00:13:51.527 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:51.527 ================================================================================= 00:13:51.527 1.00000% : 8519.680us 00:13:51.527 10.00000% : 9055.884us 00:13:51.527 25.00000% : 9711.244us 00:13:51.527 50.00000% : 12153.949us 00:13:51.527 75.00000% : 17992.611us 00:13:51.527 90.00000% : 19184.175us 00:13:51.527 95.00000% : 19660.800us 00:13:51.527 98.00000% : 23116.335us 00:13:51.527 
99.00000% : 26571.869us 00:13:51.527 99.50000% : 33602.095us 00:13:51.527 99.90000% : 35508.596us 00:13:51.527 99.99000% : 35746.909us 00:13:51.527 99.99900% : 35746.909us 00:13:51.527 99.99990% : 35746.909us 00:13:51.527 99.99999% : 35746.909us 00:13:51.527 00:13:51.527 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:51.527 ================================================================================= 00:13:51.527 1.00000% : 8519.680us 00:13:51.527 10.00000% : 9055.884us 00:13:51.527 25.00000% : 9711.244us 00:13:51.527 50.00000% : 12153.949us 00:13:51.527 75.00000% : 17992.611us 00:13:51.527 90.00000% : 19184.175us 00:13:51.527 95.00000% : 19660.800us 00:13:51.527 98.00000% : 22520.553us 00:13:51.527 99.00000% : 23831.273us 00:13:51.527 99.50000% : 30384.873us 00:13:51.527 99.90000% : 32172.218us 00:13:51.527 99.99000% : 32410.531us 00:13:51.527 99.99900% : 32410.531us 00:13:51.527 99.99990% : 32410.531us 00:13:51.527 99.99999% : 32410.531us 00:13:51.527 00:13:51.527 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:51.527 ================================================================================= 00:13:51.527 1.00000% : 8460.102us 00:13:51.527 10.00000% : 9055.884us 00:13:51.527 25.00000% : 9711.244us 00:13:51.527 50.00000% : 12153.949us 00:13:51.527 75.00000% : 17992.611us 00:13:51.527 90.00000% : 19184.175us 00:13:51.527 95.00000% : 19541.644us 00:13:51.527 98.00000% : 21686.458us 00:13:51.527 99.00000% : 23712.116us 00:13:51.527 99.50000% : 26810.182us 00:13:51.527 99.90000% : 28478.371us 00:13:51.527 99.99000% : 28835.840us 00:13:51.527 99.99900% : 28835.840us 00:13:51.527 99.99990% : 28835.840us 00:13:51.527 99.99999% : 28835.840us 00:13:51.527 00:13:51.527 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:51.527 ============================================================================== 00:13:51.527 Range in us Cumulative IO count 00:13:51.527 8102.633 - 8162.211: 0.0217% ( 2) 00:13:51.527 8162.211 - 8221.789: 0.1194% ( 9) 00:13:51.527 8221.789 - 8281.367: 0.3472% ( 21) 00:13:51.527 8281.367 - 8340.945: 0.9006% ( 51) 00:13:51.527 8340.945 - 8400.524: 1.5299% ( 58) 00:13:51.527 8400.524 - 8460.102: 2.2786% ( 69) 00:13:51.527 8460.102 - 8519.680: 3.0273% ( 69) 00:13:51.527 8519.680 - 8579.258: 3.7760% ( 69) 00:13:51.527 8579.258 - 8638.836: 4.6332% ( 79) 00:13:51.527 8638.836 - 8698.415: 5.4688% ( 77) 00:13:51.527 8698.415 - 8757.993: 6.3368% ( 80) 00:13:51.527 8757.993 - 8817.571: 7.3568% ( 94) 00:13:51.527 8817.571 - 8877.149: 8.2682% ( 84) 00:13:51.527 8877.149 - 8936.727: 9.2448% ( 90) 00:13:51.527 8936.727 - 8996.305: 10.2973% ( 97) 00:13:51.527 8996.305 - 9055.884: 11.3824% ( 100) 00:13:51.527 9055.884 - 9115.462: 12.5434% ( 107) 00:13:51.527 9115.462 - 9175.040: 13.8238% ( 118) 00:13:51.527 9175.040 - 9234.618: 15.0716% ( 115) 00:13:51.527 9234.618 - 9294.196: 16.2977% ( 113) 00:13:51.527 9294.196 - 9353.775: 17.5347% ( 114) 00:13:51.527 9353.775 - 9413.353: 18.8368% ( 120) 00:13:51.527 9413.353 - 9472.931: 20.0955% ( 116) 00:13:51.527 9472.931 - 9532.509: 21.3759% ( 118) 00:13:51.527 9532.509 - 9592.087: 22.5694% ( 110) 00:13:51.527 9592.087 - 9651.665: 23.9258% ( 125) 00:13:51.527 9651.665 - 9711.244: 24.9783% ( 97) 00:13:51.527 9711.244 - 9770.822: 25.8789% ( 83) 00:13:51.527 9770.822 - 9830.400: 26.5951% ( 66) 00:13:51.527 9830.400 - 9889.978: 27.2678% ( 62) 00:13:51.527 9889.978 - 9949.556: 27.9622% ( 64) 00:13:51.527 9949.556 - 10009.135: 28.6892% ( 67) 00:13:51.527 10009.135 - 10068.713: 29.3837% 
( 64) 00:13:51.527 10068.713 - 10128.291: 29.9696% ( 54) 00:13:51.527 10128.291 - 10187.869: 30.6532% ( 63) 00:13:51.527 10187.869 - 10247.447: 31.2826% ( 58) 00:13:51.527 10247.447 - 10307.025: 31.8142% ( 49) 00:13:51.527 10307.025 - 10366.604: 32.4870% ( 62) 00:13:51.527 10366.604 - 10426.182: 33.1272% ( 59) 00:13:51.527 10426.182 - 10485.760: 33.6372% ( 47) 00:13:51.527 10485.760 - 10545.338: 34.1254% ( 45) 00:13:51.527 10545.338 - 10604.916: 34.6788% ( 51) 00:13:51.527 10604.916 - 10664.495: 35.0911% ( 38) 00:13:51.527 10664.495 - 10724.073: 35.5360% ( 41) 00:13:51.527 10724.073 - 10783.651: 36.0243% ( 45) 00:13:51.527 10783.651 - 10843.229: 36.4692% ( 41) 00:13:51.527 10843.229 - 10902.807: 36.9900% ( 48) 00:13:51.527 10902.807 - 10962.385: 37.4891% ( 46) 00:13:51.527 10962.385 - 11021.964: 38.0642% ( 53) 00:13:51.527 11021.964 - 11081.542: 38.6285% ( 52) 00:13:51.527 11081.542 - 11141.120: 39.2470% ( 57) 00:13:51.527 11141.120 - 11200.698: 39.8872% ( 59) 00:13:51.527 11200.698 - 11260.276: 40.5165% ( 58) 00:13:51.527 11260.276 - 11319.855: 41.2001% ( 63) 00:13:51.527 11319.855 - 11379.433: 41.8077% ( 56) 00:13:51.527 11379.433 - 11439.011: 42.4588% ( 60) 00:13:51.527 11439.011 - 11498.589: 43.0773% ( 57) 00:13:51.527 11498.589 - 11558.167: 43.6957% ( 57) 00:13:51.527 11558.167 - 11617.745: 44.2708% ( 53) 00:13:51.527 11617.745 - 11677.324: 44.8676% ( 55) 00:13:51.527 11677.324 - 11736.902: 45.4210% ( 51) 00:13:51.527 11736.902 - 11796.480: 45.9744% ( 51) 00:13:51.527 11796.480 - 11856.058: 46.4844% ( 47) 00:13:51.527 11856.058 - 11915.636: 47.0269% ( 50) 00:13:51.527 11915.636 - 11975.215: 47.5803% ( 51) 00:13:51.527 11975.215 - 12034.793: 48.1662% ( 54) 00:13:51.527 12034.793 - 12094.371: 48.7630% ( 55) 00:13:51.527 12094.371 - 12153.949: 49.2730% ( 47) 00:13:51.527 12153.949 - 12213.527: 49.7938% ( 48) 00:13:51.527 12213.527 - 12273.105: 50.3147% ( 48) 00:13:51.527 12273.105 - 12332.684: 50.7704% ( 42) 00:13:51.527 12332.684 - 12392.262: 51.2478% ( 44) 00:13:51.527 12392.262 - 12451.840: 51.7144% ( 43) 00:13:51.527 12451.840 - 12511.418: 52.1810% ( 43) 00:13:51.527 12511.418 - 12570.996: 52.5174% ( 31) 00:13:51.527 12570.996 - 12630.575: 52.7995% ( 26) 00:13:51.527 12630.575 - 12690.153: 53.0382% ( 22) 00:13:51.527 12690.153 - 12749.731: 53.2878% ( 23) 00:13:51.527 12749.731 - 12809.309: 53.4831% ( 18) 00:13:51.527 12809.309 - 12868.887: 53.6892% ( 19) 00:13:51.527 12868.887 - 12928.465: 53.8845% ( 18) 00:13:51.527 12928.465 - 12988.044: 54.0690% ( 17) 00:13:51.528 12988.044 - 13047.622: 54.1884% ( 11) 00:13:51.528 13047.622 - 13107.200: 54.3077% ( 11) 00:13:51.528 13107.200 - 13166.778: 54.4379% ( 12) 00:13:51.528 13166.778 - 13226.356: 54.5790% ( 13) 00:13:51.528 13226.356 - 13285.935: 54.6875% ( 10) 00:13:51.528 13285.935 - 13345.513: 54.7852% ( 9) 00:13:51.528 13345.513 - 13405.091: 54.8828% ( 9) 00:13:51.528 13405.091 - 13464.669: 54.9913% ( 10) 00:13:51.528 13464.669 - 13524.247: 55.0890% ( 9) 00:13:51.528 13524.247 - 13583.825: 55.2192% ( 12) 00:13:51.528 13583.825 - 13643.404: 55.3494% ( 12) 00:13:51.528 13643.404 - 13702.982: 55.4579% ( 10) 00:13:51.528 13702.982 - 13762.560: 55.5664% ( 10) 00:13:51.528 13762.560 - 13822.138: 55.6749% ( 10) 00:13:51.528 13822.138 - 13881.716: 55.7943% ( 11) 00:13:51.528 13881.716 - 13941.295: 55.9028% ( 10) 00:13:51.528 13941.295 - 14000.873: 56.0547% ( 14) 00:13:51.528 14000.873 - 14060.451: 56.1740% ( 11) 00:13:51.528 14060.451 - 14120.029: 56.2826% ( 10) 00:13:51.528 14120.029 - 14179.607: 56.4128% ( 12) 00:13:51.528 14179.607 - 
14239.185: 56.5538% ( 13) 00:13:51.528 14239.185 - 14298.764: 56.6623% ( 10) 00:13:51.528 14298.764 - 14358.342: 56.8142% ( 14) 00:13:51.528 14358.342 - 14417.920: 56.9336% ( 11) 00:13:51.528 14417.920 - 14477.498: 57.1181% ( 17) 00:13:51.528 14477.498 - 14537.076: 57.2808% ( 15) 00:13:51.528 14537.076 - 14596.655: 57.4436% ( 15) 00:13:51.528 14596.655 - 14656.233: 57.5846% ( 13) 00:13:51.528 14656.233 - 14715.811: 57.7257% ( 13) 00:13:51.528 14715.811 - 14775.389: 57.8993% ( 16) 00:13:51.528 14775.389 - 14834.967: 58.0295% ( 12) 00:13:51.528 14834.967 - 14894.545: 58.2140% ( 17) 00:13:51.528 14894.545 - 14954.124: 58.3442% ( 12) 00:13:51.528 14954.124 - 15013.702: 58.5069% ( 15) 00:13:51.528 15013.702 - 15073.280: 58.6697% ( 15) 00:13:51.528 15073.280 - 15132.858: 58.7891% ( 11) 00:13:51.528 15132.858 - 15192.436: 58.9301% ( 13) 00:13:51.528 15192.436 - 15252.015: 59.0603% ( 12) 00:13:51.528 15252.015 - 15371.171: 59.2990% ( 22) 00:13:51.528 15371.171 - 15490.327: 59.4401% ( 13) 00:13:51.528 15490.327 - 15609.484: 59.6029% ( 15) 00:13:51.528 15609.484 - 15728.640: 59.7656% ( 15) 00:13:51.528 15728.640 - 15847.796: 59.9392% ( 16) 00:13:51.528 15847.796 - 15966.953: 60.1671% ( 21) 00:13:51.528 15966.953 - 16086.109: 60.4167% ( 23) 00:13:51.528 16086.109 - 16205.265: 60.6988% ( 26) 00:13:51.528 16205.265 - 16324.422: 60.9809% ( 26) 00:13:51.528 16324.422 - 16443.578: 61.3173% ( 31) 00:13:51.528 16443.578 - 16562.735: 61.9683% ( 60) 00:13:51.528 16562.735 - 16681.891: 63.0317% ( 98) 00:13:51.528 16681.891 - 16801.047: 64.2036% ( 108) 00:13:51.528 16801.047 - 16920.204: 65.4297% ( 113) 00:13:51.528 16920.204 - 17039.360: 66.7209% ( 119) 00:13:51.528 17039.360 - 17158.516: 68.0230% ( 120) 00:13:51.528 17158.516 - 17277.673: 69.3685% ( 124) 00:13:51.528 17277.673 - 17396.829: 70.6163% ( 115) 00:13:51.528 17396.829 - 17515.985: 71.9618% ( 124) 00:13:51.528 17515.985 - 17635.142: 73.2747% ( 121) 00:13:51.528 17635.142 - 17754.298: 74.5009% ( 113) 00:13:51.528 17754.298 - 17873.455: 75.8572% ( 125) 00:13:51.528 17873.455 - 17992.611: 77.2135% ( 125) 00:13:51.528 17992.611 - 18111.767: 78.4180% ( 111) 00:13:51.528 18111.767 - 18230.924: 79.7092% ( 119) 00:13:51.528 18230.924 - 18350.080: 80.8919% ( 109) 00:13:51.528 18350.080 - 18469.236: 82.1289% ( 114) 00:13:51.528 18469.236 - 18588.393: 83.3984% ( 117) 00:13:51.528 18588.393 - 18707.549: 84.6463% ( 115) 00:13:51.528 18707.549 - 18826.705: 85.7856% ( 105) 00:13:51.528 18826.705 - 18945.862: 87.0660% ( 118) 00:13:51.528 18945.862 - 19065.018: 88.2487% ( 109) 00:13:51.528 19065.018 - 19184.175: 89.3880% ( 105) 00:13:51.528 19184.175 - 19303.331: 90.6141% ( 113) 00:13:51.528 19303.331 - 19422.487: 91.5907% ( 90) 00:13:51.528 19422.487 - 19541.644: 92.6866% ( 101) 00:13:51.528 19541.644 - 19660.800: 93.7066% ( 94) 00:13:51.528 19660.800 - 19779.956: 94.7483% ( 96) 00:13:51.528 19779.956 - 19899.113: 95.7357% ( 91) 00:13:51.528 19899.113 - 20018.269: 96.4518% ( 66) 00:13:51.528 20018.269 - 20137.425: 96.7882% ( 31) 00:13:51.528 20137.425 - 20256.582: 96.9618% ( 16) 00:13:51.528 20256.582 - 20375.738: 97.0486% ( 8) 00:13:51.528 20375.738 - 20494.895: 97.1029% ( 5) 00:13:51.528 20494.895 - 20614.051: 97.1680% ( 6) 00:13:51.528 20614.051 - 20733.207: 97.2005% ( 3) 00:13:51.528 20733.207 - 20852.364: 97.2222% ( 2) 00:13:51.528 21090.676 - 21209.833: 97.2548% ( 3) 00:13:51.528 21209.833 - 21328.989: 97.3090% ( 5) 00:13:51.528 21328.989 - 21448.145: 97.3524% ( 4) 00:13:51.528 21448.145 - 21567.302: 97.3958% ( 4) 00:13:51.528 21567.302 - 21686.458: 97.4718% ( 
7) 00:13:51.528 21686.458 - 21805.615: 97.4935% ( 2) 00:13:51.528 21805.615 - 21924.771: 97.5477% ( 5) 00:13:51.528 21924.771 - 22043.927: 97.5911% ( 4) 00:13:51.528 22043.927 - 22163.084: 97.6454% ( 5) 00:13:51.528 22163.084 - 22282.240: 97.7105% ( 6) 00:13:51.528 22282.240 - 22401.396: 97.7322% ( 2) 00:13:51.528 22401.396 - 22520.553: 97.7865% ( 5) 00:13:51.528 22520.553 - 22639.709: 97.8299% ( 4) 00:13:51.528 22639.709 - 22758.865: 97.8733% ( 4) 00:13:51.528 22758.865 - 22878.022: 97.9275% ( 5) 00:13:51.528 22878.022 - 22997.178: 97.9818% ( 5) 00:13:51.528 22997.178 - 23116.335: 98.0143% ( 3) 00:13:51.528 23116.335 - 23235.491: 98.0577% ( 4) 00:13:51.528 23235.491 - 23354.647: 98.1120% ( 5) 00:13:51.528 23354.647 - 23473.804: 98.1662% ( 5) 00:13:51.528 23473.804 - 23592.960: 98.2096% ( 4) 00:13:51.528 23592.960 - 23712.116: 98.2530% ( 4) 00:13:51.528 23712.116 - 23831.273: 98.2964% ( 4) 00:13:51.528 23831.273 - 23950.429: 98.3507% ( 5) 00:13:51.528 23950.429 - 24069.585: 98.3941% ( 4) 00:13:51.528 24069.585 - 24188.742: 98.4484% ( 5) 00:13:51.528 24188.742 - 24307.898: 98.5026% ( 5) 00:13:51.528 24307.898 - 24427.055: 98.5352% ( 3) 00:13:51.528 24427.055 - 24546.211: 98.5569% ( 2) 00:13:51.528 24546.211 - 24665.367: 98.6111% ( 5) 00:13:51.528 34317.033 - 34555.345: 98.6437% ( 3) 00:13:51.528 34555.345 - 34793.658: 98.6979% ( 5) 00:13:51.528 34793.658 - 35031.971: 98.7630% ( 6) 00:13:51.528 35031.971 - 35270.284: 98.8064% ( 4) 00:13:51.528 35270.284 - 35508.596: 98.8715% ( 6) 00:13:51.528 35508.596 - 35746.909: 98.9366% ( 6) 00:13:51.528 35746.909 - 35985.222: 98.9909% ( 5) 00:13:51.528 35985.222 - 36223.535: 99.0560% ( 6) 00:13:51.528 36223.535 - 36461.847: 99.1102% ( 5) 00:13:51.528 36461.847 - 36700.160: 99.1645% ( 5) 00:13:51.528 36700.160 - 36938.473: 99.2188% ( 5) 00:13:51.528 36938.473 - 37176.785: 99.2730% ( 5) 00:13:51.528 37176.785 - 37415.098: 99.3056% ( 3) 00:13:51.528 42181.353 - 42419.665: 99.3598% ( 5) 00:13:51.528 42419.665 - 42657.978: 99.4141% ( 5) 00:13:51.528 42657.978 - 42896.291: 99.4683% ( 5) 00:13:51.528 42896.291 - 43134.604: 99.5226% ( 5) 00:13:51.528 43134.604 - 43372.916: 99.5877% ( 6) 00:13:51.528 43372.916 - 43611.229: 99.6419% ( 5) 00:13:51.528 43611.229 - 43849.542: 99.6962% ( 5) 00:13:51.528 43849.542 - 44087.855: 99.7613% ( 6) 00:13:51.528 44087.855 - 44326.167: 99.8155% ( 5) 00:13:51.528 44326.167 - 44564.480: 99.8806% ( 6) 00:13:51.528 44564.480 - 44802.793: 99.9240% ( 4) 00:13:51.528 44802.793 - 45041.105: 99.9891% ( 6) 00:13:51.528 45041.105 - 45279.418: 100.0000% ( 1) 00:13:51.528 00:13:51.528 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:51.528 ============================================================================== 00:13:51.528 Range in us Cumulative IO count 00:13:51.528 8221.789 - 8281.367: 0.0109% ( 1) 00:13:51.528 8281.367 - 8340.945: 0.1302% ( 11) 00:13:51.528 8340.945 - 8400.524: 0.4015% ( 25) 00:13:51.528 8400.524 - 8460.102: 1.0417% ( 59) 00:13:51.528 8460.102 - 8519.680: 1.8121% ( 71) 00:13:51.528 8519.680 - 8579.258: 2.7344% ( 85) 00:13:51.528 8579.258 - 8638.836: 3.6458% ( 84) 00:13:51.528 8638.836 - 8698.415: 4.5573% ( 84) 00:13:51.528 8698.415 - 8757.993: 5.5990% ( 96) 00:13:51.528 8757.993 - 8817.571: 6.7166% ( 103) 00:13:51.528 8817.571 - 8877.149: 7.8016% ( 100) 00:13:51.528 8877.149 - 8936.727: 8.9844% ( 109) 00:13:51.528 8936.727 - 8996.305: 10.1128% ( 104) 00:13:51.528 8996.305 - 9055.884: 11.3281% ( 112) 00:13:51.528 9055.884 - 9115.462: 12.5651% ( 114) 00:13:51.528 9115.462 - 9175.040: 13.9323% ( 
126) 00:13:51.528 9175.040 - 9234.618: 15.2452% ( 121) 00:13:51.528 9234.618 - 9294.196: 16.6992% ( 134) 00:13:51.528 9294.196 - 9353.775: 18.1858% ( 137) 00:13:51.528 9353.775 - 9413.353: 19.6506% ( 135) 00:13:51.528 9413.353 - 9472.931: 21.2023% ( 143) 00:13:51.528 9472.931 - 9532.509: 22.5911% ( 128) 00:13:51.528 9532.509 - 9592.087: 23.9258% ( 123) 00:13:51.528 9592.087 - 9651.665: 24.8698% ( 87) 00:13:51.528 9651.665 - 9711.244: 25.6510% ( 72) 00:13:51.528 9711.244 - 9770.822: 26.3780% ( 67) 00:13:51.528 9770.822 - 9830.400: 27.1267% ( 69) 00:13:51.528 9830.400 - 9889.978: 27.7995% ( 62) 00:13:51.528 9889.978 - 9949.556: 28.5373% ( 68) 00:13:51.528 9949.556 - 10009.135: 29.1667% ( 58) 00:13:51.528 10009.135 - 10068.713: 29.7635% ( 55) 00:13:51.528 10068.713 - 10128.291: 30.3494% ( 54) 00:13:51.528 10128.291 - 10187.869: 30.9462% ( 55) 00:13:51.528 10187.869 - 10247.447: 31.5647% ( 57) 00:13:51.529 10247.447 - 10307.025: 32.1506% ( 54) 00:13:51.529 10307.025 - 10366.604: 32.6931% ( 50) 00:13:51.529 10366.604 - 10426.182: 33.2357% ( 50) 00:13:51.529 10426.182 - 10485.760: 33.7457% ( 47) 00:13:51.529 10485.760 - 10545.338: 34.1797% ( 40) 00:13:51.529 10545.338 - 10604.916: 34.6137% ( 40) 00:13:51.529 10604.916 - 10664.495: 35.0477% ( 40) 00:13:51.529 10664.495 - 10724.073: 35.4926% ( 41) 00:13:51.529 10724.073 - 10783.651: 35.9484% ( 42) 00:13:51.529 10783.651 - 10843.229: 36.4692% ( 48) 00:13:51.529 10843.229 - 10902.807: 37.0334% ( 52) 00:13:51.529 10902.807 - 10962.385: 37.5977% ( 52) 00:13:51.529 10962.385 - 11021.964: 38.1619% ( 52) 00:13:51.529 11021.964 - 11081.542: 38.7261% ( 52) 00:13:51.529 11081.542 - 11141.120: 39.2687% ( 50) 00:13:51.529 11141.120 - 11200.698: 39.8546% ( 54) 00:13:51.529 11200.698 - 11260.276: 40.4948% ( 59) 00:13:51.529 11260.276 - 11319.855: 41.1458% ( 60) 00:13:51.529 11319.855 - 11379.433: 41.8620% ( 66) 00:13:51.529 11379.433 - 11439.011: 42.4913% ( 58) 00:13:51.529 11439.011 - 11498.589: 43.1315% ( 59) 00:13:51.529 11498.589 - 11558.167: 43.8043% ( 62) 00:13:51.529 11558.167 - 11617.745: 44.4661% ( 61) 00:13:51.529 11617.745 - 11677.324: 45.1172% ( 60) 00:13:51.529 11677.324 - 11736.902: 45.7248% ( 56) 00:13:51.529 11736.902 - 11796.480: 46.3976% ( 62) 00:13:51.529 11796.480 - 11856.058: 47.0812% ( 63) 00:13:51.529 11856.058 - 11915.636: 47.7322% ( 60) 00:13:51.529 11915.636 - 11975.215: 48.3398% ( 56) 00:13:51.529 11975.215 - 12034.793: 48.8715% ( 49) 00:13:51.529 12034.793 - 12094.371: 49.3815% ( 47) 00:13:51.529 12094.371 - 12153.949: 49.8589% ( 44) 00:13:51.529 12153.949 - 12213.527: 50.3472% ( 45) 00:13:51.529 12213.527 - 12273.105: 50.7812% ( 40) 00:13:51.529 12273.105 - 12332.684: 51.2695% ( 45) 00:13:51.529 12332.684 - 12392.262: 51.6493% ( 35) 00:13:51.529 12392.262 - 12451.840: 52.0291% ( 35) 00:13:51.529 12451.840 - 12511.418: 52.3112% ( 26) 00:13:51.529 12511.418 - 12570.996: 52.6150% ( 28) 00:13:51.529 12570.996 - 12630.575: 52.8754% ( 24) 00:13:51.529 12630.575 - 12690.153: 53.0382% ( 15) 00:13:51.529 12690.153 - 12749.731: 53.1901% ( 14) 00:13:51.529 12749.731 - 12809.309: 53.3312% ( 13) 00:13:51.529 12809.309 - 12868.887: 53.4614% ( 12) 00:13:51.529 12868.887 - 12928.465: 53.5807% ( 11) 00:13:51.529 12928.465 - 12988.044: 53.7218% ( 13) 00:13:51.529 12988.044 - 13047.622: 53.8411% ( 11) 00:13:51.529 13047.622 - 13107.200: 53.9497% ( 10) 00:13:51.529 13107.200 - 13166.778: 54.0799% ( 12) 00:13:51.529 13166.778 - 13226.356: 54.1884% ( 10) 00:13:51.529 13226.356 - 13285.935: 54.3077% ( 11) 00:13:51.529 13285.935 - 13345.513: 54.4271% ( 11) 
00:13:51.529 13345.513 - 13405.091: 54.5356% ( 10) 00:13:51.529 13405.091 - 13464.669: 54.6224% ( 8) 00:13:51.529 13464.669 - 13524.247: 54.7309% ( 10) 00:13:51.529 13524.247 - 13583.825: 54.8069% ( 7) 00:13:51.529 13583.825 - 13643.404: 54.8937% ( 8) 00:13:51.529 13643.404 - 13702.982: 55.0130% ( 11) 00:13:51.529 13702.982 - 13762.560: 55.1215% ( 10) 00:13:51.529 13762.560 - 13822.138: 55.2192% ( 9) 00:13:51.529 13822.138 - 13881.716: 55.3385% ( 11) 00:13:51.529 13881.716 - 13941.295: 55.4905% ( 14) 00:13:51.529 13941.295 - 14000.873: 55.6315% ( 13) 00:13:51.529 14000.873 - 14060.451: 55.8485% ( 20) 00:13:51.529 14060.451 - 14120.029: 56.0330% ( 17) 00:13:51.529 14120.029 - 14179.607: 56.2066% ( 16) 00:13:51.529 14179.607 - 14239.185: 56.4128% ( 19) 00:13:51.529 14239.185 - 14298.764: 56.5864% ( 16) 00:13:51.529 14298.764 - 14358.342: 56.7274% ( 13) 00:13:51.529 14358.342 - 14417.920: 56.8902% ( 15) 00:13:51.529 14417.920 - 14477.498: 57.0530% ( 15) 00:13:51.529 14477.498 - 14537.076: 57.2266% ( 16) 00:13:51.529 14537.076 - 14596.655: 57.3893% ( 15) 00:13:51.529 14596.655 - 14656.233: 57.5955% ( 19) 00:13:51.529 14656.233 - 14715.811: 57.7799% ( 17) 00:13:51.529 14715.811 - 14775.389: 57.9427% ( 15) 00:13:51.529 14775.389 - 14834.967: 58.1380% ( 18) 00:13:51.529 14834.967 - 14894.545: 58.3008% ( 15) 00:13:51.529 14894.545 - 14954.124: 58.4744% ( 16) 00:13:51.529 14954.124 - 15013.702: 58.6155% ( 13) 00:13:51.529 15013.702 - 15073.280: 58.7674% ( 14) 00:13:51.529 15073.280 - 15132.858: 58.8542% ( 8) 00:13:51.529 15132.858 - 15192.436: 58.9410% ( 8) 00:13:51.529 15192.436 - 15252.015: 59.0061% ( 6) 00:13:51.529 15252.015 - 15371.171: 59.1580% ( 14) 00:13:51.529 15371.171 - 15490.327: 59.3316% ( 16) 00:13:51.529 15490.327 - 15609.484: 59.5052% ( 16) 00:13:51.529 15609.484 - 15728.640: 59.5920% ( 8) 00:13:51.529 15728.640 - 15847.796: 59.6897% ( 9) 00:13:51.529 15847.796 - 15966.953: 59.8199% ( 12) 00:13:51.529 15966.953 - 16086.109: 59.9609% ( 13) 00:13:51.529 16086.109 - 16205.265: 60.1454% ( 17) 00:13:51.529 16205.265 - 16324.422: 60.3841% ( 22) 00:13:51.529 16324.422 - 16443.578: 60.6445% ( 24) 00:13:51.529 16443.578 - 16562.735: 60.9809% ( 31) 00:13:51.529 16562.735 - 16681.891: 61.3607% ( 35) 00:13:51.529 16681.891 - 16801.047: 61.9032% ( 50) 00:13:51.529 16801.047 - 16920.204: 62.9015% ( 92) 00:13:51.529 16920.204 - 17039.360: 64.2687% ( 126) 00:13:51.529 17039.360 - 17158.516: 65.8095% ( 142) 00:13:51.529 17158.516 - 17277.673: 67.2418% ( 132) 00:13:51.529 17277.673 - 17396.829: 68.7500% ( 139) 00:13:51.529 17396.829 - 17515.985: 70.3451% ( 147) 00:13:51.529 17515.985 - 17635.142: 71.9510% ( 148) 00:13:51.529 17635.142 - 17754.298: 73.4592% ( 139) 00:13:51.529 17754.298 - 17873.455: 75.0217% ( 144) 00:13:51.529 17873.455 - 17992.611: 76.6493% ( 150) 00:13:51.529 17992.611 - 18111.767: 78.0924% ( 133) 00:13:51.529 18111.767 - 18230.924: 79.5898% ( 138) 00:13:51.529 18230.924 - 18350.080: 81.1415% ( 143) 00:13:51.529 18350.080 - 18469.236: 82.6389% ( 138) 00:13:51.529 18469.236 - 18588.393: 84.1146% ( 136) 00:13:51.529 18588.393 - 18707.549: 85.6011% ( 137) 00:13:51.529 18707.549 - 18826.705: 87.0226% ( 131) 00:13:51.529 18826.705 - 18945.862: 88.4440% ( 131) 00:13:51.529 18945.862 - 19065.018: 89.7895% ( 124) 00:13:51.529 19065.018 - 19184.175: 91.0916% ( 120) 00:13:51.529 19184.175 - 19303.331: 92.3937% ( 120) 00:13:51.529 19303.331 - 19422.487: 93.6306% ( 114) 00:13:51.529 19422.487 - 19541.644: 94.7700% ( 105) 00:13:51.529 19541.644 - 19660.800: 95.8876% ( 103) 00:13:51.529 
19660.800 - 19779.956: 96.5169% ( 58) 00:13:51.529 19779.956 - 19899.113: 96.7990% ( 26) 00:13:51.529 19899.113 - 20018.269: 96.9510% ( 14) 00:13:51.529 20018.269 - 20137.425: 97.0595% ( 10) 00:13:51.529 20137.425 - 20256.582: 97.1246% ( 6) 00:13:51.529 20256.582 - 20375.738: 97.1897% ( 6) 00:13:51.529 20375.738 - 20494.895: 97.2222% ( 3) 00:13:51.529 21448.145 - 21567.302: 97.2873% ( 6) 00:13:51.529 21567.302 - 21686.458: 97.3416% ( 5) 00:13:51.529 21686.458 - 21805.615: 97.3958% ( 5) 00:13:51.529 21805.615 - 21924.771: 97.4609% ( 6) 00:13:51.529 21924.771 - 22043.927: 97.5152% ( 5) 00:13:51.529 22043.927 - 22163.084: 97.5694% ( 5) 00:13:51.529 22163.084 - 22282.240: 97.6237% ( 5) 00:13:51.529 22282.240 - 22401.396: 97.6888% ( 6) 00:13:51.529 22401.396 - 22520.553: 97.7539% ( 6) 00:13:51.529 22520.553 - 22639.709: 97.8082% ( 5) 00:13:51.529 22639.709 - 22758.865: 97.8624% ( 5) 00:13:51.529 22758.865 - 22878.022: 97.9275% ( 6) 00:13:51.529 22878.022 - 22997.178: 97.9926% ( 6) 00:13:51.529 22997.178 - 23116.335: 98.0469% ( 5) 00:13:51.529 23116.335 - 23235.491: 98.1011% ( 5) 00:13:51.529 23235.491 - 23354.647: 98.1554% ( 5) 00:13:51.529 23354.647 - 23473.804: 98.2205% ( 6) 00:13:51.529 23473.804 - 23592.960: 98.2747% ( 5) 00:13:51.529 23592.960 - 23712.116: 98.3290% ( 5) 00:13:51.529 23712.116 - 23831.273: 98.3941% ( 6) 00:13:51.529 23831.273 - 23950.429: 98.4375% ( 4) 00:13:51.529 23950.429 - 24069.585: 98.5026% ( 6) 00:13:51.529 24069.585 - 24188.742: 98.5677% ( 6) 00:13:51.529 24188.742 - 24307.898: 98.6003% ( 3) 00:13:51.529 24307.898 - 24427.055: 98.6111% ( 1) 00:13:51.529 31933.905 - 32172.218: 98.6328% ( 2) 00:13:51.529 32172.218 - 32410.531: 98.6979% ( 6) 00:13:51.529 32410.531 - 32648.844: 98.7630% ( 6) 00:13:51.529 32648.844 - 32887.156: 98.8173% ( 5) 00:13:51.529 32887.156 - 33125.469: 98.8715% ( 5) 00:13:51.529 33125.469 - 33363.782: 98.9366% ( 6) 00:13:51.529 33363.782 - 33602.095: 98.9909% ( 5) 00:13:51.529 33602.095 - 33840.407: 99.0560% ( 6) 00:13:51.529 33840.407 - 34078.720: 99.1211% ( 6) 00:13:51.529 34078.720 - 34317.033: 99.1753% ( 5) 00:13:51.529 34317.033 - 34555.345: 99.2296% ( 5) 00:13:51.529 34555.345 - 34793.658: 99.2947% ( 6) 00:13:51.529 34793.658 - 35031.971: 99.3056% ( 1) 00:13:51.529 38844.975 - 39083.287: 99.3490% ( 4) 00:13:51.529 39083.287 - 39321.600: 99.4141% ( 6) 00:13:51.529 39321.600 - 39559.913: 99.4792% ( 6) 00:13:51.529 39559.913 - 39798.225: 99.5226% ( 4) 00:13:51.529 39798.225 - 40036.538: 99.5877% ( 6) 00:13:51.529 40036.538 - 40274.851: 99.6419% ( 5) 00:13:51.529 40274.851 - 40513.164: 99.7070% ( 6) 00:13:51.529 40513.164 - 40751.476: 99.7721% ( 6) 00:13:51.529 40751.476 - 40989.789: 99.8264% ( 5) 00:13:51.529 40989.789 - 41228.102: 99.8806% ( 5) 00:13:51.529 41228.102 - 41466.415: 99.9457% ( 6) 00:13:51.529 41466.415 - 41704.727: 100.0000% ( 5) 00:13:51.529 00:13:51.529 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:51.530 ============================================================================== 00:13:51.530 Range in us Cumulative IO count 00:13:51.530 8162.211 - 8221.789: 0.0109% ( 1) 00:13:51.530 8221.789 - 8281.367: 0.0651% ( 5) 00:13:51.530 8281.367 - 8340.945: 0.1845% ( 11) 00:13:51.530 8340.945 - 8400.524: 0.4449% ( 24) 00:13:51.530 8400.524 - 8460.102: 0.9549% ( 47) 00:13:51.530 8460.102 - 8519.680: 1.7036% ( 69) 00:13:51.530 8519.680 - 8579.258: 2.5174% ( 75) 00:13:51.530 8579.258 - 8638.836: 3.4180% ( 83) 00:13:51.530 8638.836 - 8698.415: 4.4379% ( 94) 00:13:51.530 8698.415 - 8757.993: 5.4905% ( 97) 
00:13:51.530 8757.993 - 8817.571: 6.6949% ( 111) 00:13:51.530 8817.571 - 8877.149: 7.8776% ( 109) 00:13:51.530 8877.149 - 8936.727: 9.0603% ( 109) 00:13:51.530 8936.727 - 8996.305: 10.2539% ( 110) 00:13:51.530 8996.305 - 9055.884: 11.5126% ( 116) 00:13:51.530 9055.884 - 9115.462: 12.8581% ( 124) 00:13:51.530 9115.462 - 9175.040: 14.1493% ( 119) 00:13:51.530 9175.040 - 9234.618: 15.5273% ( 127) 00:13:51.530 9234.618 - 9294.196: 16.8294% ( 120) 00:13:51.530 9294.196 - 9353.775: 18.1424% ( 121) 00:13:51.530 9353.775 - 9413.353: 19.5964% ( 134) 00:13:51.530 9413.353 - 9472.931: 20.9093% ( 121) 00:13:51.530 9472.931 - 9532.509: 22.2331% ( 122) 00:13:51.530 9532.509 - 9592.087: 23.5135% ( 118) 00:13:51.530 9592.087 - 9651.665: 24.5334% ( 94) 00:13:51.530 9651.665 - 9711.244: 25.4340% ( 83) 00:13:51.530 9711.244 - 9770.822: 26.2261% ( 73) 00:13:51.530 9770.822 - 9830.400: 26.9748% ( 69) 00:13:51.530 9830.400 - 9889.978: 27.5608% ( 54) 00:13:51.530 9889.978 - 9949.556: 28.0490% ( 45) 00:13:51.530 9949.556 - 10009.135: 28.6133% ( 52) 00:13:51.530 10009.135 - 10068.713: 29.2535% ( 59) 00:13:51.530 10068.713 - 10128.291: 29.8828% ( 58) 00:13:51.530 10128.291 - 10187.869: 30.4688% ( 54) 00:13:51.530 10187.869 - 10247.447: 31.0004% ( 49) 00:13:51.530 10247.447 - 10307.025: 31.4996% ( 46) 00:13:51.530 10307.025 - 10366.604: 31.9770% ( 44) 00:13:51.530 10366.604 - 10426.182: 32.4761% ( 46) 00:13:51.530 10426.182 - 10485.760: 32.9970% ( 48) 00:13:51.530 10485.760 - 10545.338: 33.5503% ( 51) 00:13:51.530 10545.338 - 10604.916: 34.0603% ( 47) 00:13:51.530 10604.916 - 10664.495: 34.5812% ( 48) 00:13:51.530 10664.495 - 10724.073: 35.1020% ( 48) 00:13:51.530 10724.073 - 10783.651: 35.5686% ( 43) 00:13:51.530 10783.651 - 10843.229: 36.0026% ( 40) 00:13:51.530 10843.229 - 10902.807: 36.4583% ( 42) 00:13:51.530 10902.807 - 10962.385: 36.9683% ( 47) 00:13:51.530 10962.385 - 11021.964: 37.5868% ( 57) 00:13:51.530 11021.964 - 11081.542: 38.1944% ( 56) 00:13:51.530 11081.542 - 11141.120: 38.8346% ( 59) 00:13:51.530 11141.120 - 11200.698: 39.4640% ( 58) 00:13:51.530 11200.698 - 11260.276: 40.1367% ( 62) 00:13:51.530 11260.276 - 11319.855: 40.8095% ( 62) 00:13:51.530 11319.855 - 11379.433: 41.5473% ( 68) 00:13:51.530 11379.433 - 11439.011: 42.1984% ( 60) 00:13:51.530 11439.011 - 11498.589: 42.9253% ( 67) 00:13:51.530 11498.589 - 11558.167: 43.6849% ( 70) 00:13:51.530 11558.167 - 11617.745: 44.4119% ( 67) 00:13:51.530 11617.745 - 11677.324: 45.1714% ( 70) 00:13:51.530 11677.324 - 11736.902: 45.8442% ( 62) 00:13:51.530 11736.902 - 11796.480: 46.6146% ( 71) 00:13:51.530 11796.480 - 11856.058: 47.2548% ( 59) 00:13:51.530 11856.058 - 11915.636: 47.8190% ( 52) 00:13:51.530 11915.636 - 11975.215: 48.4266% ( 56) 00:13:51.530 11975.215 - 12034.793: 49.0451% ( 57) 00:13:51.530 12034.793 - 12094.371: 49.6094% ( 52) 00:13:51.530 12094.371 - 12153.949: 50.1302% ( 48) 00:13:51.530 12153.949 - 12213.527: 50.5859% ( 42) 00:13:51.530 12213.527 - 12273.105: 51.0200% ( 40) 00:13:51.530 12273.105 - 12332.684: 51.4648% ( 41) 00:13:51.530 12332.684 - 12392.262: 51.7904% ( 30) 00:13:51.530 12392.262 - 12451.840: 52.0508% ( 24) 00:13:51.530 12451.840 - 12511.418: 52.3003% ( 23) 00:13:51.530 12511.418 - 12570.996: 52.5065% ( 19) 00:13:51.530 12570.996 - 12630.575: 52.7344% ( 21) 00:13:51.530 12630.575 - 12690.153: 52.9514% ( 20) 00:13:51.530 12690.153 - 12749.731: 53.1793% ( 21) 00:13:51.530 12749.731 - 12809.309: 53.3963% ( 20) 00:13:51.530 12809.309 - 12868.887: 53.6133% ( 20) 00:13:51.530 12868.887 - 12928.465: 53.7869% ( 16) 00:13:51.530 
12928.465 - 12988.044: 53.9062% ( 11) 00:13:51.530 12988.044 - 13047.622: 54.0582% ( 14) 00:13:51.530 13047.622 - 13107.200: 54.1775% ( 11) 00:13:51.530 13107.200 - 13166.778: 54.2969% ( 11) 00:13:51.530 13166.778 - 13226.356: 54.3945% ( 9) 00:13:51.530 13226.356 - 13285.935: 54.5139% ( 11) 00:13:51.530 13285.935 - 13345.513: 54.6549% ( 13) 00:13:51.530 13345.513 - 13405.091: 54.8069% ( 14) 00:13:51.530 13405.091 - 13464.669: 54.9588% ( 14) 00:13:51.530 13464.669 - 13524.247: 55.1215% ( 15) 00:13:51.530 13524.247 - 13583.825: 55.2409% ( 11) 00:13:51.530 13583.825 - 13643.404: 55.3602% ( 11) 00:13:51.530 13643.404 - 13702.982: 55.5013% ( 13) 00:13:51.530 13702.982 - 13762.560: 55.5881% ( 8) 00:13:51.530 13762.560 - 13822.138: 55.6749% ( 8) 00:13:51.530 13822.138 - 13881.716: 55.8051% ( 12) 00:13:51.530 13881.716 - 13941.295: 55.9679% ( 15) 00:13:51.530 13941.295 - 14000.873: 56.1089% ( 13) 00:13:51.530 14000.873 - 14060.451: 56.3043% ( 18) 00:13:51.530 14060.451 - 14120.029: 56.4562% ( 14) 00:13:51.530 14120.029 - 14179.607: 56.6189% ( 15) 00:13:51.530 14179.607 - 14239.185: 56.7708% ( 14) 00:13:51.530 14239.185 - 14298.764: 56.9444% ( 16) 00:13:51.530 14298.764 - 14358.342: 57.1072% ( 15) 00:13:51.530 14358.342 - 14417.920: 57.2591% ( 14) 00:13:51.530 14417.920 - 14477.498: 57.4110% ( 14) 00:13:51.530 14477.498 - 14537.076: 57.5412% ( 12) 00:13:51.530 14537.076 - 14596.655: 57.6823% ( 13) 00:13:51.530 14596.655 - 14656.233: 57.8342% ( 14) 00:13:51.530 14656.233 - 14715.811: 57.9970% ( 15) 00:13:51.530 14715.811 - 14775.389: 58.1489% ( 14) 00:13:51.530 14775.389 - 14834.967: 58.2899% ( 13) 00:13:51.530 14834.967 - 14894.545: 58.4310% ( 13) 00:13:51.530 14894.545 - 14954.124: 58.5503% ( 11) 00:13:51.530 14954.124 - 15013.702: 58.6589% ( 10) 00:13:51.530 15013.702 - 15073.280: 58.7348% ( 7) 00:13:51.530 15073.280 - 15132.858: 58.8216% ( 8) 00:13:51.530 15132.858 - 15192.436: 58.8867% ( 6) 00:13:51.530 15192.436 - 15252.015: 58.9410% ( 5) 00:13:51.530 15252.015 - 15371.171: 59.0712% ( 12) 00:13:51.530 15371.171 - 15490.327: 59.2122% ( 13) 00:13:51.530 15490.327 - 15609.484: 59.3859% ( 16) 00:13:51.530 15609.484 - 15728.640: 59.5595% ( 16) 00:13:51.530 15728.640 - 15847.796: 59.7005% ( 13) 00:13:51.530 15847.796 - 15966.953: 59.7873% ( 8) 00:13:51.530 15966.953 - 16086.109: 59.9175% ( 12) 00:13:51.530 16086.109 - 16205.265: 60.1128% ( 18) 00:13:51.530 16205.265 - 16324.422: 60.3950% ( 26) 00:13:51.530 16324.422 - 16443.578: 60.7096% ( 29) 00:13:51.530 16443.578 - 16562.735: 61.0243% ( 29) 00:13:51.530 16562.735 - 16681.891: 61.4041% ( 35) 00:13:51.530 16681.891 - 16801.047: 61.9032% ( 46) 00:13:51.530 16801.047 - 16920.204: 62.9123% ( 93) 00:13:51.530 16920.204 - 17039.360: 64.2795% ( 126) 00:13:51.530 17039.360 - 17158.516: 65.8312% ( 143) 00:13:51.530 17158.516 - 17277.673: 67.3611% ( 141) 00:13:51.530 17277.673 - 17396.829: 68.8911% ( 141) 00:13:51.530 17396.829 - 17515.985: 70.4427% ( 143) 00:13:51.530 17515.985 - 17635.142: 71.8967% ( 134) 00:13:51.530 17635.142 - 17754.298: 73.4809% ( 146) 00:13:51.530 17754.298 - 17873.455: 75.0109% ( 141) 00:13:51.530 17873.455 - 17992.611: 76.5082% ( 138) 00:13:51.530 17992.611 - 18111.767: 78.0816% ( 145) 00:13:51.530 18111.767 - 18230.924: 79.5573% ( 136) 00:13:51.530 18230.924 - 18350.080: 81.0655% ( 139) 00:13:51.530 18350.080 - 18469.236: 82.5629% ( 138) 00:13:51.530 18469.236 - 18588.393: 83.9952% ( 132) 00:13:51.530 18588.393 - 18707.549: 85.5903% ( 147) 00:13:51.530 18707.549 - 18826.705: 87.0226% ( 132) 00:13:51.530 18826.705 - 18945.862: 
88.3898% ( 126) 00:13:51.530 18945.862 - 19065.018: 89.7135% ( 122) 00:13:51.530 19065.018 - 19184.175: 91.0482% ( 123) 00:13:51.530 19184.175 - 19303.331: 92.3286% ( 118) 00:13:51.530 19303.331 - 19422.487: 93.5764% ( 115) 00:13:51.530 19422.487 - 19541.644: 94.7808% ( 111) 00:13:51.530 19541.644 - 19660.800: 95.8333% ( 97) 00:13:51.530 19660.800 - 19779.956: 96.5169% ( 63) 00:13:51.530 19779.956 - 19899.113: 96.8207% ( 28) 00:13:51.530 19899.113 - 20018.269: 96.9618% ( 13) 00:13:51.530 20018.269 - 20137.425: 97.0812% ( 11) 00:13:51.530 20137.425 - 20256.582: 97.1680% ( 8) 00:13:51.530 20256.582 - 20375.738: 97.1897% ( 2) 00:13:51.530 20375.738 - 20494.895: 97.2222% ( 3) 00:13:51.530 21328.989 - 21448.145: 97.2331% ( 1) 00:13:51.530 21448.145 - 21567.302: 97.2765% ( 4) 00:13:51.530 21567.302 - 21686.458: 97.3307% ( 5) 00:13:51.530 21686.458 - 21805.615: 97.4175% ( 8) 00:13:51.530 21805.615 - 21924.771: 97.4718% ( 5) 00:13:51.530 21924.771 - 22043.927: 97.5260% ( 5) 00:13:51.530 22043.927 - 22163.084: 97.5803% ( 5) 00:13:51.530 22163.084 - 22282.240: 97.6454% ( 6) 00:13:51.530 22282.240 - 22401.396: 97.6997% ( 5) 00:13:51.530 22401.396 - 22520.553: 97.7539% ( 5) 00:13:51.530 22520.553 - 22639.709: 97.8082% ( 5) 00:13:51.530 22639.709 - 22758.865: 97.8624% ( 5) 00:13:51.530 22758.865 - 22878.022: 97.9167% ( 5) 00:13:51.530 22878.022 - 22997.178: 97.9709% ( 5) 00:13:51.530 22997.178 - 23116.335: 98.0360% ( 6) 00:13:51.530 23116.335 - 23235.491: 98.1011% ( 6) 00:13:51.530 23235.491 - 23354.647: 98.1445% ( 4) 00:13:51.530 23354.647 - 23473.804: 98.2096% ( 6) 00:13:51.531 23473.804 - 23592.960: 98.2530% ( 4) 00:13:51.531 23592.960 - 23712.116: 98.3181% ( 6) 00:13:51.531 23712.116 - 23831.273: 98.3832% ( 6) 00:13:51.531 23831.273 - 23950.429: 98.4266% ( 4) 00:13:51.531 23950.429 - 24069.585: 98.4918% ( 6) 00:13:51.531 24069.585 - 24188.742: 98.5569% ( 6) 00:13:51.531 24188.742 - 24307.898: 98.6003% ( 4) 00:13:51.531 24307.898 - 24427.055: 98.6111% ( 1) 00:13:51.531 28597.527 - 28716.684: 98.6220% ( 1) 00:13:51.531 28716.684 - 28835.840: 98.6545% ( 3) 00:13:51.531 28835.840 - 28954.996: 98.6654% ( 1) 00:13:51.531 28954.996 - 29074.153: 98.6979% ( 3) 00:13:51.531 29074.153 - 29193.309: 98.7305% ( 3) 00:13:51.531 29193.309 - 29312.465: 98.7522% ( 2) 00:13:51.531 29312.465 - 29431.622: 98.7847% ( 3) 00:13:51.531 29431.622 - 29550.778: 98.8173% ( 3) 00:13:51.531 29550.778 - 29669.935: 98.8498% ( 3) 00:13:51.531 29669.935 - 29789.091: 98.8824% ( 3) 00:13:51.531 29789.091 - 29908.247: 98.9041% ( 2) 00:13:51.531 29908.247 - 30027.404: 98.9366% ( 3) 00:13:51.531 30027.404 - 30146.560: 98.9583% ( 2) 00:13:51.531 30146.560 - 30265.716: 98.9909% ( 3) 00:13:51.531 30265.716 - 30384.873: 99.0234% ( 3) 00:13:51.531 30384.873 - 30504.029: 99.0451% ( 2) 00:13:51.531 30504.029 - 30742.342: 99.1102% ( 6) 00:13:51.531 30742.342 - 30980.655: 99.1753% ( 6) 00:13:51.531 30980.655 - 31218.967: 99.2296% ( 5) 00:13:51.531 31218.967 - 31457.280: 99.2947% ( 6) 00:13:51.531 31457.280 - 31695.593: 99.3056% ( 1) 00:13:51.531 35985.222 - 36223.535: 99.3164% ( 1) 00:13:51.531 36223.535 - 36461.847: 99.3707% ( 5) 00:13:51.531 36461.847 - 36700.160: 99.4249% ( 5) 00:13:51.531 36700.160 - 36938.473: 99.4792% ( 5) 00:13:51.531 36938.473 - 37176.785: 99.5334% ( 5) 00:13:51.531 37176.785 - 37415.098: 99.5985% ( 6) 00:13:51.531 37415.098 - 37653.411: 99.6528% ( 5) 00:13:51.531 37653.411 - 37891.724: 99.7179% ( 6) 00:13:51.531 37891.724 - 38130.036: 99.7721% ( 5) 00:13:51.531 38130.036 - 38368.349: 99.8372% ( 6) 00:13:51.531 38368.349 
- 38606.662: 99.9023% ( 6) 00:13:51.531 38606.662 - 38844.975: 99.9566% ( 5) 00:13:51.531 38844.975 - 39083.287: 100.0000% ( 4) 00:13:51.531 00:13:51.531 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:51.531 ============================================================================== 00:13:51.531 Range in us Cumulative IO count 00:13:51.531 8162.211 - 8221.789: 0.0109% ( 1) 00:13:51.531 8221.789 - 8281.367: 0.0651% ( 5) 00:13:51.531 8281.367 - 8340.945: 0.1628% ( 9) 00:13:51.531 8340.945 - 8400.524: 0.4340% ( 25) 00:13:51.531 8400.524 - 8460.102: 0.9549% ( 48) 00:13:51.531 8460.102 - 8519.680: 1.6385% ( 63) 00:13:51.531 8519.680 - 8579.258: 2.4306% ( 73) 00:13:51.531 8579.258 - 8638.836: 3.2444% ( 75) 00:13:51.531 8638.836 - 8698.415: 4.1775% ( 86) 00:13:51.531 8698.415 - 8757.993: 5.2409% ( 98) 00:13:51.531 8757.993 - 8817.571: 6.2826% ( 96) 00:13:51.531 8817.571 - 8877.149: 7.4002% ( 103) 00:13:51.531 8877.149 - 8936.727: 8.5395% ( 105) 00:13:51.531 8936.727 - 8996.305: 9.7765% ( 114) 00:13:51.531 8996.305 - 9055.884: 11.1220% ( 124) 00:13:51.531 9055.884 - 9115.462: 12.4023% ( 118) 00:13:51.531 9115.462 - 9175.040: 13.7370% ( 123) 00:13:51.531 9175.040 - 9234.618: 15.1693% ( 132) 00:13:51.531 9234.618 - 9294.196: 16.4714% ( 120) 00:13:51.531 9294.196 - 9353.775: 17.8060% ( 123) 00:13:51.531 9353.775 - 9413.353: 19.2057% ( 129) 00:13:51.531 9413.353 - 9472.931: 20.6489% ( 133) 00:13:51.531 9472.931 - 9532.509: 22.0052% ( 125) 00:13:51.531 9532.509 - 9592.087: 23.2530% ( 115) 00:13:51.531 9592.087 - 9651.665: 24.3490% ( 101) 00:13:51.531 9651.665 - 9711.244: 25.3038% ( 88) 00:13:51.531 9711.244 - 9770.822: 26.0634% ( 70) 00:13:51.531 9770.822 - 9830.400: 26.8555% ( 73) 00:13:51.531 9830.400 - 9889.978: 27.5716% ( 66) 00:13:51.531 9889.978 - 9949.556: 28.2118% ( 59) 00:13:51.531 9949.556 - 10009.135: 28.8086% ( 55) 00:13:51.531 10009.135 - 10068.713: 29.3511% ( 50) 00:13:51.531 10068.713 - 10128.291: 29.9588% ( 56) 00:13:51.531 10128.291 - 10187.869: 30.5230% ( 52) 00:13:51.531 10187.869 - 10247.447: 31.0113% ( 45) 00:13:51.531 10247.447 - 10307.025: 31.4887% ( 44) 00:13:51.531 10307.025 - 10366.604: 32.0421% ( 51) 00:13:51.531 10366.604 - 10426.182: 32.5087% ( 43) 00:13:51.531 10426.182 - 10485.760: 32.9753% ( 43) 00:13:51.531 10485.760 - 10545.338: 33.4635% ( 45) 00:13:51.531 10545.338 - 10604.916: 33.9193% ( 42) 00:13:51.531 10604.916 - 10664.495: 34.3641% ( 41) 00:13:51.531 10664.495 - 10724.073: 34.8416% ( 44) 00:13:51.531 10724.073 - 10783.651: 35.2973% ( 42) 00:13:51.531 10783.651 - 10843.229: 35.8290% ( 49) 00:13:51.531 10843.229 - 10902.807: 36.3824% ( 51) 00:13:51.531 10902.807 - 10962.385: 36.8707% ( 45) 00:13:51.531 10962.385 - 11021.964: 37.3372% ( 43) 00:13:51.531 11021.964 - 11081.542: 37.8581% ( 48) 00:13:51.531 11081.542 - 11141.120: 38.4440% ( 54) 00:13:51.531 11141.120 - 11200.698: 39.0516% ( 56) 00:13:51.531 11200.698 - 11260.276: 39.7569% ( 65) 00:13:51.531 11260.276 - 11319.855: 40.5056% ( 69) 00:13:51.531 11319.855 - 11379.433: 41.2760% ( 71) 00:13:51.531 11379.433 - 11439.011: 42.0681% ( 73) 00:13:51.531 11439.011 - 11498.589: 42.8277% ( 70) 00:13:51.531 11498.589 - 11558.167: 43.5113% ( 63) 00:13:51.531 11558.167 - 11617.745: 44.2925% ( 72) 00:13:51.531 11617.745 - 11677.324: 45.0846% ( 73) 00:13:51.531 11677.324 - 11736.902: 45.8333% ( 69) 00:13:51.531 11736.902 - 11796.480: 46.6254% ( 73) 00:13:51.531 11796.480 - 11856.058: 47.3524% ( 67) 00:13:51.531 11856.058 - 11915.636: 48.0252% ( 62) 00:13:51.531 11915.636 - 11975.215: 48.6003% ( 53) 
00:13:51.531 11975.215 - 12034.793: 49.2296% ( 58) 00:13:51.531 12034.793 - 12094.371: 49.8047% ( 53) 00:13:51.531 12094.371 - 12153.949: 50.3798% ( 53) 00:13:51.531 12153.949 - 12213.527: 50.9223% ( 50) 00:13:51.531 12213.527 - 12273.105: 51.4106% ( 45) 00:13:51.531 12273.105 - 12332.684: 51.8446% ( 40) 00:13:51.531 12332.684 - 12392.262: 52.2352% ( 36) 00:13:51.531 12392.262 - 12451.840: 52.5391% ( 28) 00:13:51.531 12451.840 - 12511.418: 52.8103% ( 25) 00:13:51.531 12511.418 - 12570.996: 53.0165% ( 19) 00:13:51.531 12570.996 - 12630.575: 53.1793% ( 15) 00:13:51.531 12630.575 - 12690.153: 53.3637% ( 17) 00:13:51.531 12690.153 - 12749.731: 53.5482% ( 17) 00:13:51.531 12749.731 - 12809.309: 53.6675% ( 11) 00:13:51.531 12809.309 - 12868.887: 53.7652% ( 9) 00:13:51.531 12868.887 - 12928.465: 53.8737% ( 10) 00:13:51.531 12928.465 - 12988.044: 53.9714% ( 9) 00:13:51.531 12988.044 - 13047.622: 54.0690% ( 9) 00:13:51.531 13047.622 - 13107.200: 54.1667% ( 9) 00:13:51.531 13107.200 - 13166.778: 54.2752% ( 10) 00:13:51.531 13166.778 - 13226.356: 54.3728% ( 9) 00:13:51.531 13226.356 - 13285.935: 54.4705% ( 9) 00:13:51.531 13285.935 - 13345.513: 54.5573% ( 8) 00:13:51.531 13345.513 - 13405.091: 54.6549% ( 9) 00:13:51.531 13405.091 - 13464.669: 54.7743% ( 11) 00:13:51.531 13464.669 - 13524.247: 54.9045% ( 12) 00:13:51.531 13524.247 - 13583.825: 54.9913% ( 8) 00:13:51.531 13583.825 - 13643.404: 55.0781% ( 8) 00:13:51.531 13643.404 - 13702.982: 55.1758% ( 9) 00:13:51.531 13702.982 - 13762.560: 55.2517% ( 7) 00:13:51.531 13762.560 - 13822.138: 55.3602% ( 10) 00:13:51.531 13822.138 - 13881.716: 55.4688% ( 10) 00:13:51.531 13881.716 - 13941.295: 55.6315% ( 15) 00:13:51.531 13941.295 - 14000.873: 55.7943% ( 15) 00:13:51.532 14000.873 - 14060.451: 55.9679% ( 16) 00:13:51.532 14060.451 - 14120.029: 56.1198% ( 14) 00:13:51.532 14120.029 - 14179.607: 56.2934% ( 16) 00:13:51.532 14179.607 - 14239.185: 56.4453% ( 14) 00:13:51.532 14239.185 - 14298.764: 56.6298% ( 17) 00:13:51.532 14298.764 - 14358.342: 56.7817% ( 14) 00:13:51.532 14358.342 - 14417.920: 56.9336% ( 14) 00:13:51.532 14417.920 - 14477.498: 57.1181% ( 17) 00:13:51.532 14477.498 - 14537.076: 57.2374% ( 11) 00:13:51.532 14537.076 - 14596.655: 57.3785% ( 13) 00:13:51.532 14596.655 - 14656.233: 57.5629% ( 17) 00:13:51.532 14656.233 - 14715.811: 57.7365% ( 16) 00:13:51.532 14715.811 - 14775.389: 57.8993% ( 15) 00:13:51.532 14775.389 - 14834.967: 58.0621% ( 15) 00:13:51.532 14834.967 - 14894.545: 58.2357% ( 16) 00:13:51.532 14894.545 - 14954.124: 58.4201% ( 17) 00:13:51.532 14954.124 - 15013.702: 58.5938% ( 16) 00:13:51.532 15013.702 - 15073.280: 58.6914% ( 9) 00:13:51.532 15073.280 - 15132.858: 58.7782% ( 8) 00:13:51.532 15132.858 - 15192.436: 58.8650% ( 8) 00:13:51.532 15192.436 - 15252.015: 58.9627% ( 9) 00:13:51.532 15252.015 - 15371.171: 59.1363% ( 16) 00:13:51.532 15371.171 - 15490.327: 59.2665% ( 12) 00:13:51.532 15490.327 - 15609.484: 59.4076% ( 13) 00:13:51.532 15609.484 - 15728.640: 59.5703% ( 15) 00:13:51.532 15728.640 - 15847.796: 59.6897% ( 11) 00:13:51.532 15847.796 - 15966.953: 59.7873% ( 9) 00:13:51.532 15966.953 - 16086.109: 59.8850% ( 9) 00:13:51.532 16086.109 - 16205.265: 60.1562% ( 25) 00:13:51.532 16205.265 - 16324.422: 60.4167% ( 24) 00:13:51.532 16324.422 - 16443.578: 60.7205% ( 28) 00:13:51.532 16443.578 - 16562.735: 61.0569% ( 31) 00:13:51.532 16562.735 - 16681.891: 61.4258% ( 34) 00:13:51.532 16681.891 - 16801.047: 61.9466% ( 48) 00:13:51.532 16801.047 - 16920.204: 62.9666% ( 94) 00:13:51.532 16920.204 - 17039.360: 64.2361% ( 117) 
00:13:51.532 17039.360 - 17158.516: 65.7335% ( 138) 00:13:51.532 17158.516 - 17277.673: 67.2092% ( 136) 00:13:51.532 17277.673 - 17396.829: 68.7391% ( 141) 00:13:51.532 17396.829 - 17515.985: 70.2040% ( 135) 00:13:51.532 17515.985 - 17635.142: 71.7990% ( 147) 00:13:51.532 17635.142 - 17754.298: 73.3507% ( 143) 00:13:51.532 17754.298 - 17873.455: 74.9132% ( 144) 00:13:51.532 17873.455 - 17992.611: 76.3997% ( 137) 00:13:51.532 17992.611 - 18111.767: 78.0056% ( 148) 00:13:51.532 18111.767 - 18230.924: 79.4922% ( 137) 00:13:51.532 18230.924 - 18350.080: 80.9896% ( 138) 00:13:51.532 18350.080 - 18469.236: 82.5304% ( 142) 00:13:51.532 18469.236 - 18588.393: 84.0061% ( 136) 00:13:51.532 18588.393 - 18707.549: 85.5035% ( 138) 00:13:51.532 18707.549 - 18826.705: 87.0226% ( 140) 00:13:51.532 18826.705 - 18945.862: 88.3572% ( 123) 00:13:51.532 18945.862 - 19065.018: 89.6810% ( 122) 00:13:51.532 19065.018 - 19184.175: 91.0048% ( 122) 00:13:51.532 19184.175 - 19303.331: 92.2852% ( 118) 00:13:51.532 19303.331 - 19422.487: 93.5981% ( 121) 00:13:51.532 19422.487 - 19541.644: 94.8242% ( 113) 00:13:51.532 19541.644 - 19660.800: 95.8767% ( 97) 00:13:51.532 19660.800 - 19779.956: 96.5495% ( 62) 00:13:51.532 19779.956 - 19899.113: 96.8533% ( 28) 00:13:51.532 19899.113 - 20018.269: 96.9944% ( 13) 00:13:51.532 20018.269 - 20137.425: 97.0920% ( 9) 00:13:51.532 20137.425 - 20256.582: 97.1680% ( 7) 00:13:51.532 20256.582 - 20375.738: 97.2005% ( 3) 00:13:51.532 20375.738 - 20494.895: 97.2222% ( 2) 00:13:51.532 21328.989 - 21448.145: 97.2331% ( 1) 00:13:51.532 21448.145 - 21567.302: 97.2873% ( 5) 00:13:51.532 21567.302 - 21686.458: 97.3416% ( 5) 00:13:51.532 21686.458 - 21805.615: 97.4067% ( 6) 00:13:51.532 21805.615 - 21924.771: 97.4609% ( 5) 00:13:51.532 21924.771 - 22043.927: 97.5152% ( 5) 00:13:51.532 22043.927 - 22163.084: 97.5694% ( 5) 00:13:51.532 22163.084 - 22282.240: 97.6345% ( 6) 00:13:51.532 22282.240 - 22401.396: 97.6888% ( 5) 00:13:51.532 22401.396 - 22520.553: 97.7431% ( 5) 00:13:51.532 22520.553 - 22639.709: 97.8082% ( 6) 00:13:51.532 22639.709 - 22758.865: 97.8624% ( 5) 00:13:51.532 22758.865 - 22878.022: 97.9167% ( 5) 00:13:51.532 22878.022 - 22997.178: 97.9818% ( 6) 00:13:51.532 22997.178 - 23116.335: 98.0469% ( 6) 00:13:51.532 23116.335 - 23235.491: 98.0903% ( 4) 00:13:51.532 23235.491 - 23354.647: 98.1554% ( 6) 00:13:51.532 23354.647 - 23473.804: 98.2205% ( 6) 00:13:51.532 23473.804 - 23592.960: 98.2639% ( 4) 00:13:51.532 23592.960 - 23712.116: 98.3181% ( 5) 00:13:51.532 23712.116 - 23831.273: 98.3832% ( 6) 00:13:51.532 23831.273 - 23950.429: 98.4484% ( 6) 00:13:51.532 23950.429 - 24069.585: 98.4918% ( 4) 00:13:51.532 24069.585 - 24188.742: 98.5569% ( 6) 00:13:51.532 24188.742 - 24307.898: 98.6003% ( 4) 00:13:51.532 24307.898 - 24427.055: 98.6111% ( 1) 00:13:51.532 25022.836 - 25141.993: 98.6328% ( 2) 00:13:51.532 25141.993 - 25261.149: 98.6654% ( 3) 00:13:51.532 25261.149 - 25380.305: 98.6979% ( 3) 00:13:51.532 25380.305 - 25499.462: 98.7305% ( 3) 00:13:51.532 25499.462 - 25618.618: 98.7630% ( 3) 00:13:51.532 25618.618 - 25737.775: 98.7847% ( 2) 00:13:51.532 25737.775 - 25856.931: 98.8173% ( 3) 00:13:51.532 25856.931 - 25976.087: 98.8498% ( 3) 00:13:51.532 25976.087 - 26095.244: 98.8824% ( 3) 00:13:51.532 26095.244 - 26214.400: 98.9041% ( 2) 00:13:51.532 26214.400 - 26333.556: 98.9366% ( 3) 00:13:51.532 26333.556 - 26452.713: 98.9692% ( 3) 00:13:51.532 26452.713 - 26571.869: 99.0017% ( 3) 00:13:51.532 26571.869 - 26691.025: 99.0343% ( 3) 00:13:51.532 26691.025 - 26810.182: 99.0560% ( 2) 
00:13:51.532 26810.182 - 26929.338: 99.0885% ( 3) 00:13:51.532 26929.338 - 27048.495: 99.1211% ( 3) 00:13:51.532 27048.495 - 27167.651: 99.1536% ( 3) 00:13:51.532 27167.651 - 27286.807: 99.1862% ( 3) 00:13:51.532 27286.807 - 27405.964: 99.2188% ( 3) 00:13:51.532 27405.964 - 27525.120: 99.2513% ( 3) 00:13:51.532 27525.120 - 27644.276: 99.2839% ( 3) 00:13:51.532 27644.276 - 27763.433: 99.3056% ( 2) 00:13:51.532 32648.844 - 32887.156: 99.3490% ( 4) 00:13:51.532 32887.156 - 33125.469: 99.4032% ( 5) 00:13:51.532 33125.469 - 33363.782: 99.4683% ( 6) 00:13:51.532 33363.782 - 33602.095: 99.5117% ( 4) 00:13:51.532 33602.095 - 33840.407: 99.5768% ( 6) 00:13:51.532 33840.407 - 34078.720: 99.6202% ( 4) 00:13:51.532 34078.720 - 34317.033: 99.6853% ( 6) 00:13:51.532 34317.033 - 34555.345: 99.7179% ( 3) 00:13:51.532 34555.345 - 34793.658: 99.7721% ( 5) 00:13:51.532 34793.658 - 35031.971: 99.8372% ( 6) 00:13:51.532 35031.971 - 35270.284: 99.8915% ( 5) 00:13:51.532 35270.284 - 35508.596: 99.9566% ( 6) 00:13:51.532 35508.596 - 35746.909: 100.0000% ( 4) 00:13:51.532 00:13:51.532 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:51.532 ============================================================================== 00:13:51.532 Range in us Cumulative IO count 00:13:51.532 8162.211 - 8221.789: 0.0109% ( 1) 00:13:51.532 8221.789 - 8281.367: 0.0434% ( 3) 00:13:51.532 8281.367 - 8340.945: 0.1085% ( 6) 00:13:51.532 8340.945 - 8400.524: 0.4340% ( 30) 00:13:51.532 8400.524 - 8460.102: 0.9115% ( 44) 00:13:51.532 8460.102 - 8519.680: 1.6385% ( 67) 00:13:51.532 8519.680 - 8579.258: 2.4631% ( 76) 00:13:51.532 8579.258 - 8638.836: 3.3746% ( 84) 00:13:51.532 8638.836 - 8698.415: 4.2535% ( 81) 00:13:51.532 8698.415 - 8757.993: 5.1866% ( 86) 00:13:51.532 8757.993 - 8817.571: 6.2391% ( 97) 00:13:51.532 8817.571 - 8877.149: 7.2591% ( 94) 00:13:51.532 8877.149 - 8936.727: 8.3767% ( 103) 00:13:51.532 8936.727 - 8996.305: 9.5920% ( 112) 00:13:51.532 8996.305 - 9055.884: 10.8507% ( 116) 00:13:51.532 9055.884 - 9115.462: 12.1202% ( 117) 00:13:51.532 9115.462 - 9175.040: 13.5200% ( 129) 00:13:51.532 9175.040 - 9234.618: 14.8763% ( 125) 00:13:51.532 9234.618 - 9294.196: 16.2543% ( 127) 00:13:51.532 9294.196 - 9353.775: 17.5998% ( 124) 00:13:51.532 9353.775 - 9413.353: 18.9887% ( 128) 00:13:51.532 9413.353 - 9472.931: 20.3776% ( 128) 00:13:51.532 9472.931 - 9532.509: 21.8967% ( 140) 00:13:51.532 9532.509 - 9592.087: 23.1337% ( 114) 00:13:51.532 9592.087 - 9651.665: 24.2405% ( 102) 00:13:51.532 9651.665 - 9711.244: 25.1736% ( 86) 00:13:51.532 9711.244 - 9770.822: 26.0742% ( 83) 00:13:51.532 9770.822 - 9830.400: 26.9748% ( 83) 00:13:51.532 9830.400 - 9889.978: 27.8320% ( 79) 00:13:51.532 9889.978 - 9949.556: 28.6241% ( 73) 00:13:51.532 9949.556 - 10009.135: 29.4162% ( 73) 00:13:51.532 10009.135 - 10068.713: 30.1215% ( 65) 00:13:51.532 10068.713 - 10128.291: 30.8160% ( 64) 00:13:51.532 10128.291 - 10187.869: 31.4128% ( 55) 00:13:51.532 10187.869 - 10247.447: 31.9336% ( 48) 00:13:51.533 10247.447 - 10307.025: 32.4327% ( 46) 00:13:51.533 10307.025 - 10366.604: 32.9210% ( 45) 00:13:51.533 10366.604 - 10426.182: 33.3333% ( 38) 00:13:51.533 10426.182 - 10485.760: 33.8108% ( 44) 00:13:51.533 10485.760 - 10545.338: 34.2665% ( 42) 00:13:51.533 10545.338 - 10604.916: 34.8090% ( 50) 00:13:51.533 10604.916 - 10664.495: 35.3407% ( 49) 00:13:51.533 10664.495 - 10724.073: 35.9158% ( 53) 00:13:51.533 10724.073 - 10783.651: 36.3824% ( 43) 00:13:51.533 10783.651 - 10843.229: 36.7622% ( 35) 00:13:51.533 10843.229 - 10902.807: 37.1745% ( 
38) 00:13:51.533 10902.807 - 10962.385: 37.6302% ( 42) 00:13:51.533 10962.385 - 11021.964: 38.1076% ( 44) 00:13:51.533 11021.964 - 11081.542: 38.6176% ( 47) 00:13:51.533 11081.542 - 11141.120: 39.1602% ( 50) 00:13:51.533 11141.120 - 11200.698: 39.7135% ( 51) 00:13:51.533 11200.698 - 11260.276: 40.3212% ( 56) 00:13:51.533 11260.276 - 11319.855: 40.9614% ( 59) 00:13:51.533 11319.855 - 11379.433: 41.6450% ( 63) 00:13:51.533 11379.433 - 11439.011: 42.4154% ( 71) 00:13:51.533 11439.011 - 11498.589: 43.0990% ( 63) 00:13:51.533 11498.589 - 11558.167: 43.8151% ( 66) 00:13:51.533 11558.167 - 11617.745: 44.5421% ( 67) 00:13:51.533 11617.745 - 11677.324: 45.2582% ( 66) 00:13:51.533 11677.324 - 11736.902: 45.9744% ( 66) 00:13:51.533 11736.902 - 11796.480: 46.7339% ( 70) 00:13:51.533 11796.480 - 11856.058: 47.4175% ( 63) 00:13:51.533 11856.058 - 11915.636: 48.0686% ( 60) 00:13:51.533 11915.636 - 11975.215: 48.6437% ( 53) 00:13:51.533 11975.215 - 12034.793: 49.2513% ( 56) 00:13:51.533 12034.793 - 12094.371: 49.8264% ( 53) 00:13:51.533 12094.371 - 12153.949: 50.3689% ( 50) 00:13:51.533 12153.949 - 12213.527: 50.8572% ( 45) 00:13:51.533 12213.527 - 12273.105: 51.2587% ( 37) 00:13:51.533 12273.105 - 12332.684: 51.6276% ( 34) 00:13:51.533 12332.684 - 12392.262: 51.9531% ( 30) 00:13:51.533 12392.262 - 12451.840: 52.2569% ( 28) 00:13:51.533 12451.840 - 12511.418: 52.4740% ( 20) 00:13:51.533 12511.418 - 12570.996: 52.6801% ( 19) 00:13:51.533 12570.996 - 12630.575: 52.8646% ( 17) 00:13:51.533 12630.575 - 12690.153: 53.0273% ( 15) 00:13:51.533 12690.153 - 12749.731: 53.2118% ( 17) 00:13:51.533 12749.731 - 12809.309: 53.3854% ( 16) 00:13:51.533 12809.309 - 12868.887: 53.5156% ( 12) 00:13:51.533 12868.887 - 12928.465: 53.6024% ( 8) 00:13:51.533 12928.465 - 12988.044: 53.6675% ( 6) 00:13:51.533 12988.044 - 13047.622: 53.7326% ( 6) 00:13:51.533 13047.622 - 13107.200: 53.7869% ( 5) 00:13:51.533 13107.200 - 13166.778: 53.8411% ( 5) 00:13:51.533 13166.778 - 13226.356: 53.8737% ( 3) 00:13:51.533 13226.356 - 13285.935: 53.9062% ( 3) 00:13:51.533 13285.935 - 13345.513: 53.9388% ( 3) 00:13:51.533 13345.513 - 13405.091: 53.9822% ( 4) 00:13:51.533 13405.091 - 13464.669: 54.0148% ( 3) 00:13:51.533 13464.669 - 13524.247: 54.0473% ( 3) 00:13:51.533 13524.247 - 13583.825: 54.1016% ( 5) 00:13:51.533 13583.825 - 13643.404: 54.1667% ( 6) 00:13:51.533 13643.404 - 13702.982: 54.2426% ( 7) 00:13:51.533 13702.982 - 13762.560: 54.3077% ( 6) 00:13:51.533 13762.560 - 13822.138: 54.3837% ( 7) 00:13:51.533 13822.138 - 13881.716: 54.4379% ( 5) 00:13:51.533 13881.716 - 13941.295: 54.5573% ( 11) 00:13:51.533 13941.295 - 14000.873: 54.7201% ( 15) 00:13:51.533 14000.873 - 14060.451: 54.9045% ( 17) 00:13:51.533 14060.451 - 14120.029: 55.0673% ( 15) 00:13:51.533 14120.029 - 14179.607: 55.2517% ( 17) 00:13:51.533 14179.607 - 14239.185: 55.4145% ( 15) 00:13:51.533 14239.185 - 14298.764: 55.6098% ( 18) 00:13:51.533 14298.764 - 14358.342: 55.8051% ( 18) 00:13:51.533 14358.342 - 14417.920: 56.0221% ( 20) 00:13:51.533 14417.920 - 14477.498: 56.2391% ( 20) 00:13:51.533 14477.498 - 14537.076: 56.4670% ( 21) 00:13:51.533 14537.076 - 14596.655: 56.6840% ( 20) 00:13:51.533 14596.655 - 14656.233: 56.9336% ( 23) 00:13:51.533 14656.233 - 14715.811: 57.1615% ( 21) 00:13:51.533 14715.811 - 14775.389: 57.4327% ( 25) 00:13:51.533 14775.389 - 14834.967: 57.6606% ( 21) 00:13:51.533 14834.967 - 14894.545: 57.8993% ( 22) 00:13:51.533 14894.545 - 14954.124: 58.1055% ( 19) 00:13:51.533 14954.124 - 15013.702: 58.3008% ( 18) 00:13:51.533 15013.702 - 15073.280: 58.4961% ( 
18) 00:13:51.533 15073.280 - 15132.858: 58.6589% ( 15) 00:13:51.533 15132.858 - 15192.436: 58.7348% ( 7) 00:13:51.533 15192.436 - 15252.015: 58.8325% ( 9) 00:13:51.533 15252.015 - 15371.171: 59.0061% ( 16) 00:13:51.533 15371.171 - 15490.327: 59.1797% ( 16) 00:13:51.533 15490.327 - 15609.484: 59.3424% ( 15) 00:13:51.533 15609.484 - 15728.640: 59.4510% ( 10) 00:13:51.533 15728.640 - 15847.796: 59.5486% ( 9) 00:13:51.533 15847.796 - 15966.953: 59.7222% ( 16) 00:13:51.533 15966.953 - 16086.109: 59.8741% ( 14) 00:13:51.533 16086.109 - 16205.265: 60.1237% ( 23) 00:13:51.533 16205.265 - 16324.422: 60.4058% ( 26) 00:13:51.533 16324.422 - 16443.578: 60.6988% ( 27) 00:13:51.533 16443.578 - 16562.735: 61.0135% ( 29) 00:13:51.533 16562.735 - 16681.891: 61.3607% ( 32) 00:13:51.533 16681.891 - 16801.047: 61.8490% ( 45) 00:13:51.533 16801.047 - 16920.204: 62.6628% ( 75) 00:13:51.533 16920.204 - 17039.360: 63.9757% ( 121) 00:13:51.533 17039.360 - 17158.516: 65.5707% ( 147) 00:13:51.533 17158.516 - 17277.673: 67.2526% ( 155) 00:13:51.533 17277.673 - 17396.829: 68.7391% ( 137) 00:13:51.533 17396.829 - 17515.985: 70.2257% ( 137) 00:13:51.533 17515.985 - 17635.142: 71.7882% ( 144) 00:13:51.533 17635.142 - 17754.298: 73.2747% ( 137) 00:13:51.533 17754.298 - 17873.455: 74.8047% ( 141) 00:13:51.533 17873.455 - 17992.611: 76.4106% ( 148) 00:13:51.533 17992.611 - 18111.767: 77.9839% ( 145) 00:13:51.533 18111.767 - 18230.924: 79.4271% ( 133) 00:13:51.533 18230.924 - 18350.080: 81.0438% ( 149) 00:13:51.533 18350.080 - 18469.236: 82.5304% ( 137) 00:13:51.533 18469.236 - 18588.393: 84.0278% ( 138) 00:13:51.533 18588.393 - 18707.549: 85.5469% ( 140) 00:13:51.533 18707.549 - 18826.705: 87.0334% ( 137) 00:13:51.533 18826.705 - 18945.862: 88.3464% ( 121) 00:13:51.533 18945.862 - 19065.018: 89.6593% ( 121) 00:13:51.533 19065.018 - 19184.175: 90.9831% ( 122) 00:13:51.533 19184.175 - 19303.331: 92.3611% ( 127) 00:13:51.533 19303.331 - 19422.487: 93.5764% ( 112) 00:13:51.533 19422.487 - 19541.644: 94.8568% ( 118) 00:13:51.533 19541.644 - 19660.800: 95.9527% ( 101) 00:13:51.533 19660.800 - 19779.956: 96.5278% ( 53) 00:13:51.533 19779.956 - 19899.113: 96.8641% ( 31) 00:13:51.533 19899.113 - 20018.269: 97.0052% ( 13) 00:13:51.533 20018.269 - 20137.425: 97.1029% ( 9) 00:13:51.533 20137.425 - 20256.582: 97.1788% ( 7) 00:13:51.533 20256.582 - 20375.738: 97.2114% ( 3) 00:13:51.533 20375.738 - 20494.895: 97.2222% ( 1) 00:13:51.533 21209.833 - 21328.989: 97.2331% ( 1) 00:13:51.533 21328.989 - 21448.145: 97.2656% ( 3) 00:13:51.533 21448.145 - 21567.302: 97.3307% ( 6) 00:13:51.533 21567.302 - 21686.458: 97.4284% ( 9) 00:13:51.533 21686.458 - 21805.615: 97.5477% ( 11) 00:13:51.533 21805.615 - 21924.771: 97.6237% ( 7) 00:13:51.533 21924.771 - 22043.927: 97.7214% ( 9) 00:13:51.533 22043.927 - 22163.084: 97.7973% ( 7) 00:13:51.533 22163.084 - 22282.240: 97.8841% ( 8) 00:13:51.533 22282.240 - 22401.396: 97.9709% ( 8) 00:13:51.533 22401.396 - 22520.553: 98.0577% ( 8) 00:13:51.533 22520.553 - 22639.709: 98.1337% ( 7) 00:13:51.533 22639.709 - 22758.865: 98.2313% ( 9) 00:13:51.534 22758.865 - 22878.022: 98.3073% ( 7) 00:13:51.534 22878.022 - 22997.178: 98.3941% ( 8) 00:13:51.534 22997.178 - 23116.335: 98.4809% ( 8) 00:13:51.534 23116.335 - 23235.491: 98.5569% ( 7) 00:13:51.534 23235.491 - 23354.647: 98.6545% ( 9) 00:13:51.534 23354.647 - 23473.804: 98.7305% ( 7) 00:13:51.534 23473.804 - 23592.960: 98.8173% ( 8) 00:13:51.534 23592.960 - 23712.116: 98.9149% ( 9) 00:13:51.534 23712.116 - 23831.273: 99.0017% ( 8) 00:13:51.534 23831.273 - 23950.429: 
99.0777% ( 7) 00:13:51.534 23950.429 - 24069.585: 99.1753% ( 9) 00:13:51.534 24069.585 - 24188.742: 99.2513% ( 7) 00:13:51.534 24188.742 - 24307.898: 99.2947% ( 4) 00:13:51.534 24307.898 - 24427.055: 99.3056% ( 1) 00:13:51.534 29431.622 - 29550.778: 99.3164% ( 1) 00:13:51.534 29550.778 - 29669.935: 99.3381% ( 2) 00:13:51.534 29669.935 - 29789.091: 99.3707% ( 3) 00:13:51.534 29789.091 - 29908.247: 99.3924% ( 2) 00:13:51.534 29908.247 - 30027.404: 99.4249% ( 3) 00:13:51.534 30027.404 - 30146.560: 99.4575% ( 3) 00:13:51.534 30146.560 - 30265.716: 99.4792% ( 2) 00:13:51.534 30265.716 - 30384.873: 99.5009% ( 2) 00:13:51.534 30384.873 - 30504.029: 99.5334% ( 3) 00:13:51.534 30504.029 - 30742.342: 99.5877% ( 5) 00:13:51.534 30742.342 - 30980.655: 99.6528% ( 6) 00:13:51.534 30980.655 - 31218.967: 99.7070% ( 5) 00:13:51.534 31218.967 - 31457.280: 99.7613% ( 5) 00:13:51.534 31457.280 - 31695.593: 99.8264% ( 6) 00:13:51.534 31695.593 - 31933.905: 99.8806% ( 5) 00:13:51.534 31933.905 - 32172.218: 99.9457% ( 6) 00:13:51.534 32172.218 - 32410.531: 100.0000% ( 5) 00:13:51.534 00:13:51.534 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:51.534 ============================================================================== 00:13:51.534 Range in us Cumulative IO count 00:13:51.534 8221.789 - 8281.367: 0.0217% ( 2) 00:13:51.534 8281.367 - 8340.945: 0.0977% ( 7) 00:13:51.534 8340.945 - 8400.524: 0.4123% ( 29) 00:13:51.534 8400.524 - 8460.102: 1.0091% ( 55) 00:13:51.534 8460.102 - 8519.680: 1.7904% ( 72) 00:13:51.534 8519.680 - 8579.258: 2.5933% ( 74) 00:13:51.534 8579.258 - 8638.836: 3.4614% ( 80) 00:13:51.534 8638.836 - 8698.415: 4.3837% ( 85) 00:13:51.534 8698.415 - 8757.993: 5.3602% ( 90) 00:13:51.534 8757.993 - 8817.571: 6.3368% ( 90) 00:13:51.534 8817.571 - 8877.149: 7.3676% ( 95) 00:13:51.534 8877.149 - 8936.727: 8.4310% ( 98) 00:13:51.534 8936.727 - 8996.305: 9.5269% ( 101) 00:13:51.534 8996.305 - 9055.884: 10.7422% ( 112) 00:13:51.534 9055.884 - 9115.462: 11.9358% ( 110) 00:13:51.534 9115.462 - 9175.040: 13.1510% ( 112) 00:13:51.534 9175.040 - 9234.618: 14.5074% ( 125) 00:13:51.534 9234.618 - 9294.196: 15.8095% ( 120) 00:13:51.534 9294.196 - 9353.775: 17.1875% ( 127) 00:13:51.534 9353.775 - 9413.353: 18.7500% ( 144) 00:13:51.534 9413.353 - 9472.931: 20.2257% ( 136) 00:13:51.534 9472.931 - 9532.509: 21.6905% ( 135) 00:13:51.534 9532.509 - 9592.087: 23.0794% ( 128) 00:13:51.534 9592.087 - 9651.665: 24.1753% ( 101) 00:13:51.534 9651.665 - 9711.244: 25.0434% ( 80) 00:13:51.534 9711.244 - 9770.822: 25.7921% ( 69) 00:13:51.534 9770.822 - 9830.400: 26.5299% ( 68) 00:13:51.534 9830.400 - 9889.978: 27.2678% ( 68) 00:13:51.534 9889.978 - 9949.556: 27.9622% ( 64) 00:13:51.534 9949.556 - 10009.135: 28.6784% ( 66) 00:13:51.534 10009.135 - 10068.713: 29.3620% ( 63) 00:13:51.534 10068.713 - 10128.291: 30.0564% ( 64) 00:13:51.534 10128.291 - 10187.869: 30.7617% ( 65) 00:13:51.534 10187.869 - 10247.447: 31.3694% ( 56) 00:13:51.534 10247.447 - 10307.025: 31.9770% ( 56) 00:13:51.534 10307.025 - 10366.604: 32.6063% ( 58) 00:13:51.534 10366.604 - 10426.182: 33.1706% ( 52) 00:13:51.534 10426.182 - 10485.760: 33.7131% ( 50) 00:13:51.534 10485.760 - 10545.338: 34.1580% ( 41) 00:13:51.534 10545.338 - 10604.916: 34.6137% ( 42) 00:13:51.534 10604.916 - 10664.495: 35.1020% ( 45) 00:13:51.534 10664.495 - 10724.073: 35.5686% ( 43) 00:13:51.534 10724.073 - 10783.651: 36.0894% ( 48) 00:13:51.534 10783.651 - 10843.229: 36.5994% ( 47) 00:13:51.534 10843.229 - 10902.807: 37.0877% ( 45) 00:13:51.534 10902.807 - 
10962.385: 37.5868% ( 46) 00:13:51.534 10962.385 - 11021.964: 38.0534% ( 43) 00:13:51.534 11021.964 - 11081.542: 38.5091% ( 42) 00:13:51.534 11081.542 - 11141.120: 39.0299% ( 48) 00:13:51.534 11141.120 - 11200.698: 39.6050% ( 53) 00:13:51.534 11200.698 - 11260.276: 40.1801% ( 53) 00:13:51.534 11260.276 - 11319.855: 40.8203% ( 59) 00:13:51.534 11319.855 - 11379.433: 41.5148% ( 64) 00:13:51.534 11379.433 - 11439.011: 42.2201% ( 65) 00:13:51.534 11439.011 - 11498.589: 42.9253% ( 65) 00:13:51.534 11498.589 - 11558.167: 43.6632% ( 68) 00:13:51.534 11558.167 - 11617.745: 44.3685% ( 65) 00:13:51.534 11617.745 - 11677.324: 45.1389% ( 71) 00:13:51.534 11677.324 - 11736.902: 45.8333% ( 64) 00:13:51.534 11736.902 - 11796.480: 46.5278% ( 64) 00:13:51.534 11796.480 - 11856.058: 47.1680% ( 59) 00:13:51.534 11856.058 - 11915.636: 47.7973% ( 58) 00:13:51.534 11915.636 - 11975.215: 48.3941% ( 55) 00:13:51.534 11975.215 - 12034.793: 48.9475% ( 51) 00:13:51.534 12034.793 - 12094.371: 49.5117% ( 52) 00:13:51.534 12094.371 - 12153.949: 50.0326% ( 48) 00:13:51.534 12153.949 - 12213.527: 50.5968% ( 52) 00:13:51.534 12213.527 - 12273.105: 51.0959% ( 46) 00:13:51.534 12273.105 - 12332.684: 51.5842% ( 45) 00:13:51.534 12332.684 - 12392.262: 51.9857% ( 37) 00:13:51.534 12392.262 - 12451.840: 52.3980% ( 38) 00:13:51.534 12451.840 - 12511.418: 52.7018% ( 28) 00:13:51.534 12511.418 - 12570.996: 52.9080% ( 19) 00:13:51.534 12570.996 - 12630.575: 53.0816% ( 16) 00:13:51.534 12630.575 - 12690.153: 53.2552% ( 16) 00:13:51.534 12690.153 - 12749.731: 53.3746% ( 11) 00:13:51.534 12749.731 - 12809.309: 53.4831% ( 10) 00:13:51.534 12809.309 - 12868.887: 53.5699% ( 8) 00:13:51.534 12868.887 - 12928.465: 53.6675% ( 9) 00:13:51.534 12928.465 - 12988.044: 53.7435% ( 7) 00:13:51.534 12988.044 - 13047.622: 53.8086% ( 6) 00:13:51.534 13047.622 - 13107.200: 53.8737% ( 6) 00:13:51.534 13107.200 - 13166.778: 53.9388% ( 6) 00:13:51.534 13166.778 - 13226.356: 53.9931% ( 5) 00:13:51.534 13226.356 - 13285.935: 54.0473% ( 5) 00:13:51.534 13285.935 - 13345.513: 54.0799% ( 3) 00:13:51.534 13345.513 - 13405.091: 54.1233% ( 4) 00:13:51.534 13405.091 - 13464.669: 54.1775% ( 5) 00:13:51.534 13464.669 - 13524.247: 54.2426% ( 6) 00:13:51.534 13524.247 - 13583.825: 54.2860% ( 4) 00:13:51.534 13583.825 - 13643.404: 54.3294% ( 4) 00:13:51.534 13643.404 - 13702.982: 54.4054% ( 7) 00:13:51.534 13702.982 - 13762.560: 54.5030% ( 9) 00:13:51.534 13762.560 - 13822.138: 54.6007% ( 9) 00:13:51.534 13822.138 - 13881.716: 54.6984% ( 9) 00:13:51.534 13881.716 - 13941.295: 54.8286% ( 12) 00:13:51.534 13941.295 - 14000.873: 54.9913% ( 15) 00:13:51.534 14000.873 - 14060.451: 55.1541% ( 15) 00:13:51.534 14060.451 - 14120.029: 55.3277% ( 16) 00:13:51.534 14120.029 - 14179.607: 55.5230% ( 18) 00:13:51.534 14179.607 - 14239.185: 55.6858% ( 15) 00:13:51.534 14239.185 - 14298.764: 55.8377% ( 14) 00:13:51.534 14298.764 - 14358.342: 56.0221% ( 17) 00:13:51.534 14358.342 - 14417.920: 56.1849% ( 15) 00:13:51.534 14417.920 - 14477.498: 56.3368% ( 14) 00:13:51.534 14477.498 - 14537.076: 56.4996% ( 15) 00:13:51.534 14537.076 - 14596.655: 56.6840% ( 17) 00:13:51.534 14596.655 - 14656.233: 56.8576% ( 16) 00:13:51.534 14656.233 - 14715.811: 57.0421% ( 17) 00:13:51.534 14715.811 - 14775.389: 57.2157% ( 16) 00:13:51.534 14775.389 - 14834.967: 57.4110% ( 18) 00:13:51.534 14834.967 - 14894.545: 57.6063% ( 18) 00:13:51.534 14894.545 - 14954.124: 57.7799% ( 16) 00:13:51.534 14954.124 - 15013.702: 57.9753% ( 18) 00:13:51.534 15013.702 - 15073.280: 58.1706% ( 18) 00:13:51.534 15073.280 - 
15132.858: 58.3659% ( 18) 00:13:51.534 15132.858 - 15192.436: 58.5069% ( 13) 00:13:51.534 15192.436 - 15252.015: 58.6589% ( 14) 00:13:51.534 15252.015 - 15371.171: 58.8542% ( 18) 00:13:51.534 15371.171 - 15490.327: 58.9410% ( 8) 00:13:51.534 15490.327 - 15609.484: 59.0495% ( 10) 00:13:51.534 15609.484 - 15728.640: 59.1688% ( 11) 00:13:51.534 15728.640 - 15847.796: 59.3099% ( 13) 00:13:51.534 15847.796 - 15966.953: 59.4618% ( 14) 00:13:51.534 15966.953 - 16086.109: 59.6463% ( 17) 00:13:51.534 16086.109 - 16205.265: 59.9175% ( 25) 00:13:51.534 16205.265 - 16324.422: 60.2539% ( 31) 00:13:51.534 16324.422 - 16443.578: 60.6228% ( 34) 00:13:51.534 16443.578 - 16562.735: 60.9918% ( 34) 00:13:51.534 16562.735 - 16681.891: 61.2956% ( 28) 00:13:51.534 16681.891 - 16801.047: 61.8056% ( 47) 00:13:51.534 16801.047 - 16920.204: 62.7604% ( 88) 00:13:51.534 16920.204 - 17039.360: 64.0842% ( 122) 00:13:51.534 17039.360 - 17158.516: 65.6684% ( 146) 00:13:51.534 17158.516 - 17277.673: 67.3611% ( 156) 00:13:51.534 17277.673 - 17396.829: 68.8911% ( 141) 00:13:51.534 17396.829 - 17515.985: 70.4210% ( 141) 00:13:51.534 17515.985 - 17635.142: 71.9184% ( 138) 00:13:51.534 17635.142 - 17754.298: 73.4484% ( 141) 00:13:51.534 17754.298 - 17873.455: 74.9891% ( 142) 00:13:51.534 17873.455 - 17992.611: 76.5191% ( 141) 00:13:51.534 17992.611 - 18111.767: 78.0816% ( 144) 00:13:51.534 18111.767 - 18230.924: 79.6224% ( 142) 00:13:51.534 18230.924 - 18350.080: 81.1632% ( 142) 00:13:51.535 18350.080 - 18469.236: 82.6606% ( 138) 00:13:51.535 18469.236 - 18588.393: 84.2556% ( 147) 00:13:51.535 18588.393 - 18707.549: 85.6554% ( 129) 00:13:51.535 18707.549 - 18826.705: 87.1528% ( 138) 00:13:51.535 18826.705 - 18945.862: 88.5634% ( 130) 00:13:51.535 18945.862 - 19065.018: 89.8655% ( 120) 00:13:51.535 19065.018 - 19184.175: 91.2109% ( 124) 00:13:51.535 19184.175 - 19303.331: 92.5781% ( 126) 00:13:51.535 19303.331 - 19422.487: 93.8477% ( 117) 00:13:51.535 19422.487 - 19541.644: 95.1389% ( 119) 00:13:51.535 19541.644 - 19660.800: 96.2674% ( 104) 00:13:51.535 19660.800 - 19779.956: 96.9510% ( 63) 00:13:51.535 19779.956 - 19899.113: 97.2873% ( 31) 00:13:51.535 19899.113 - 20018.269: 97.4935% ( 19) 00:13:51.535 20018.269 - 20137.425: 97.6345% ( 13) 00:13:51.535 20137.425 - 20256.582: 97.7648% ( 12) 00:13:51.535 20256.582 - 20375.738: 97.8624% ( 9) 00:13:51.535 20375.738 - 20494.895: 97.8950% ( 3) 00:13:51.535 20494.895 - 20614.051: 97.9167% ( 2) 00:13:51.535 21328.989 - 21448.145: 97.9275% ( 1) 00:13:51.535 21448.145 - 21567.302: 97.9709% ( 4) 00:13:51.535 21567.302 - 21686.458: 98.0360% ( 6) 00:13:51.535 21686.458 - 21805.615: 98.0903% ( 5) 00:13:51.535 21805.615 - 21924.771: 98.1662% ( 7) 00:13:51.535 21924.771 - 22043.927: 98.2313% ( 6) 00:13:51.535 22043.927 - 22163.084: 98.2856% ( 5) 00:13:51.535 22163.084 - 22282.240: 98.3290% ( 4) 00:13:51.535 22282.240 - 22401.396: 98.3832% ( 5) 00:13:51.535 22401.396 - 22520.553: 98.4484% ( 6) 00:13:51.535 22520.553 - 22639.709: 98.5026% ( 5) 00:13:51.535 22639.709 - 22758.865: 98.5569% ( 5) 00:13:51.535 22758.865 - 22878.022: 98.6220% ( 6) 00:13:51.535 22878.022 - 22997.178: 98.6654% ( 4) 00:13:51.535 22997.178 - 23116.335: 98.7305% ( 6) 00:13:51.535 23116.335 - 23235.491: 98.7847% ( 5) 00:13:51.535 23235.491 - 23354.647: 98.8390% ( 5) 00:13:51.535 23354.647 - 23473.804: 98.9041% ( 6) 00:13:51.535 23473.804 - 23592.960: 98.9583% ( 5) 00:13:51.535 23592.960 - 23712.116: 99.0126% ( 5) 00:13:51.535 23712.116 - 23831.273: 99.0777% ( 6) 00:13:51.535 23831.273 - 23950.429: 99.1319% ( 5) 00:13:51.535 
23950.429 - 24069.585: 99.1862% ( 5) 00:13:51.535 24069.585 - 24188.742: 99.2513% ( 6) 00:13:51.535 24188.742 - 24307.898: 99.3056% ( 5) 00:13:51.535 25856.931 - 25976.087: 99.3164% ( 1) 00:13:51.535 25976.087 - 26095.244: 99.3381% ( 2) 00:13:51.535 26095.244 - 26214.400: 99.3598% ( 2) 00:13:51.535 26214.400 - 26333.556: 99.3924% ( 3) 00:13:51.535 26333.556 - 26452.713: 99.4249% ( 3) 00:13:51.535 26452.713 - 26571.869: 99.4575% ( 3) 00:13:51.535 26571.869 - 26691.025: 99.4792% ( 2) 00:13:51.535 26691.025 - 26810.182: 99.5117% ( 3) 00:13:51.535 26810.182 - 26929.338: 99.5443% ( 3) 00:13:51.535 26929.338 - 27048.495: 99.5768% ( 3) 00:13:51.535 27048.495 - 27167.651: 99.6094% ( 3) 00:13:51.535 27167.651 - 27286.807: 99.6419% ( 3) 00:13:51.535 27286.807 - 27405.964: 99.6636% ( 2) 00:13:51.535 27405.964 - 27525.120: 99.6962% ( 3) 00:13:51.535 27525.120 - 27644.276: 99.7179% ( 2) 00:13:51.535 27644.276 - 27763.433: 99.7504% ( 3) 00:13:51.535 27763.433 - 27882.589: 99.7830% ( 3) 00:13:51.535 27882.589 - 28001.745: 99.8047% ( 2) 00:13:51.535 28001.745 - 28120.902: 99.8372% ( 3) 00:13:51.535 28120.902 - 28240.058: 99.8589% ( 2) 00:13:51.535 28240.058 - 28359.215: 99.8915% ( 3) 00:13:51.535 28359.215 - 28478.371: 99.9240% ( 3) 00:13:51.535 28478.371 - 28597.527: 99.9566% ( 3) 00:13:51.535 28597.527 - 28716.684: 99.9891% ( 3) 00:13:51.535 28716.684 - 28835.840: 100.0000% ( 1) 00:13:51.535 00:13:51.535 03:42:06 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:13:52.913 Initializing NVMe Controllers 00:13:52.913 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:52.913 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:52.913 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:52.913 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:52.913 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:52.913 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:52.913 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:52.913 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:52.913 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:52.913 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:52.913 Initialization complete. Launching workers. 
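Editorial note on the run that produces the results below: the log line above invokes spdk_nvme_perf for a write workload. The following is a minimal annotated sketch of that same invocation; the flag glosses are based on common spdk_nvme_perf usage and are not part of the captured log, so verify them against `spdk_nvme_perf --help` for the SPDK revision actually under test.

    # Sketch of the invocation seen in the log above (flag meanings are an
    # editorial gloss, not log output; confirm with spdk_nvme_perf --help):
    #   -q 128    queue depth per namespace
    #   -w write  workload type: 100% writes
    #   -o 12288  I/O size in bytes (12 KiB)
    #   -t 1      run time in seconds
    #   -LL       enable latency tracking; giving -L twice also prints the
    #             detailed per-bucket latency histograms shown below
    #   -i 0      shared-memory group ID (allows coexistence with other SPDK apps)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0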
00:13:52.913 ======================================================== 00:13:52.913 Latency(us) 00:13:52.913 Device Information : IOPS MiB/s Average min max 00:13:52.913 PCIE (0000:00:10.0) NSID 1 from core 0: 9073.53 106.33 14123.33 10225.94 54402.61 00:13:52.913 PCIE (0000:00:11.0) NSID 1 from core 0: 9073.53 106.33 14064.78 10320.74 50310.17 00:13:52.913 PCIE (0000:00:13.0) NSID 1 from core 0: 9073.53 106.33 14007.73 10409.08 47043.10 00:13:52.913 PCIE (0000:00:12.0) NSID 1 from core 0: 9073.53 106.33 13950.85 10553.37 43369.28 00:13:52.913 PCIE (0000:00:12.0) NSID 2 from core 0: 9073.53 106.33 13894.71 10480.54 39713.13 00:13:52.913 PCIE (0000:00:12.0) NSID 3 from core 0: 9073.53 106.33 13839.20 10351.44 36036.14 00:13:52.913 ======================================================== 00:13:52.913 Total : 54441.17 637.98 13980.10 10225.94 54402.61 00:13:52.913 00:13:52.913 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:52.913 ================================================================================= 00:13:52.913 1.00000% : 10783.651us 00:13:52.913 10.00000% : 11558.167us 00:13:52.913 25.00000% : 12153.949us 00:13:52.913 50.00000% : 12988.044us 00:13:52.913 75.00000% : 14239.185us 00:13:52.913 90.00000% : 18230.924us 00:13:52.913 95.00000% : 20256.582us 00:13:52.913 98.00000% : 23354.647us 00:13:52.913 99.00000% : 40751.476us 00:13:52.913 99.50000% : 51237.236us 00:13:52.913 99.90000% : 53858.676us 00:13:52.913 99.99000% : 54573.615us 00:13:52.913 99.99900% : 54573.615us 00:13:52.913 99.99990% : 54573.615us 00:13:52.913 99.99999% : 54573.615us 00:13:52.913 00:13:52.913 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:52.913 ================================================================================= 00:13:52.913 1.00000% : 10902.807us 00:13:52.913 10.00000% : 11677.324us 00:13:52.913 25.00000% : 12213.527us 00:13:52.913 50.00000% : 12988.044us 00:13:52.913 75.00000% : 14239.185us 00:13:52.913 90.00000% : 17873.455us 00:13:52.913 95.00000% : 20018.269us 00:13:52.913 98.00000% : 22878.022us 00:13:52.913 99.00000% : 36938.473us 00:13:52.913 99.50000% : 47662.545us 00:13:52.913 99.90000% : 49807.360us 00:13:52.913 99.99000% : 50522.298us 00:13:52.913 99.99900% : 50522.298us 00:13:52.913 99.99990% : 50522.298us 00:13:52.913 99.99999% : 50522.298us 00:13:52.913 00:13:52.913 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:52.913 ================================================================================= 00:13:52.913 1.00000% : 10902.807us 00:13:52.913 10.00000% : 11677.324us 00:13:52.913 25.00000% : 12273.105us 00:13:52.913 50.00000% : 12988.044us 00:13:52.913 75.00000% : 14239.185us 00:13:52.913 90.00000% : 17873.455us 00:13:52.913 95.00000% : 20256.582us 00:13:52.913 98.00000% : 22163.084us 00:13:52.913 99.00000% : 33363.782us 00:13:52.913 99.50000% : 44326.167us 00:13:52.913 99.90000% : 46709.295us 00:13:52.913 99.99000% : 47185.920us 00:13:52.913 99.99900% : 47185.920us 00:13:52.913 99.99990% : 47185.920us 00:13:52.913 99.99999% : 47185.920us 00:13:52.913 00:13:52.913 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:52.913 ================================================================================= 00:13:52.913 1.00000% : 10902.807us 00:13:52.913 10.00000% : 11677.324us 00:13:52.913 25.00000% : 12213.527us 00:13:52.913 50.00000% : 12928.465us 00:13:52.913 75.00000% : 14298.764us 00:13:52.913 90.00000% : 17873.455us 00:13:52.913 95.00000% : 20137.425us 00:13:52.913 98.00000% : 
22043.927us 00:13:52.913 99.00000% : 29789.091us 00:13:52.913 99.50000% : 40751.476us 00:13:52.913 99.90000% : 42896.291us 00:13:52.913 99.99000% : 43372.916us 00:13:52.913 99.99900% : 43372.916us 00:13:52.913 99.99990% : 43372.916us 00:13:52.913 99.99999% : 43372.916us 00:13:52.913 00:13:52.913 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:52.913 ================================================================================= 00:13:52.913 1.00000% : 10843.229us 00:13:52.913 10.00000% : 11677.324us 00:13:52.913 25.00000% : 12213.527us 00:13:52.913 50.00000% : 12928.465us 00:13:52.913 75.00000% : 14239.185us 00:13:52.913 90.00000% : 17635.142us 00:13:52.913 95.00000% : 20137.425us 00:13:52.913 98.00000% : 23950.429us 00:13:52.913 99.00000% : 26095.244us 00:13:52.913 99.50000% : 36938.473us 00:13:52.913 99.90000% : 39321.600us 00:13:52.913 99.99000% : 39798.225us 00:13:52.913 99.99900% : 39798.225us 00:13:52.913 99.99990% : 39798.225us 00:13:52.913 99.99999% : 39798.225us 00:13:52.913 00:13:52.913 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:52.913 ================================================================================= 00:13:52.913 1.00000% : 10902.807us 00:13:52.913 10.00000% : 11677.324us 00:13:52.913 25.00000% : 12153.949us 00:13:52.913 50.00000% : 12928.465us 00:13:52.913 75.00000% : 14239.185us 00:13:52.913 90.00000% : 17873.455us 00:13:52.913 95.00000% : 20137.425us 00:13:52.913 98.00000% : 22639.709us 00:13:52.913 99.00000% : 24427.055us 00:13:52.913 99.50000% : 33363.782us 00:13:52.913 99.90000% : 35508.596us 00:13:52.913 99.99000% : 36223.535us 00:13:52.913 99.99900% : 36223.535us 00:13:52.913 99.99990% : 36223.535us 00:13:52.913 99.99999% : 36223.535us 00:13:52.913 00:13:52.913 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:52.913 ============================================================================== 00:13:52.913 Range in us Cumulative IO count 00:13:52.913 10187.869 - 10247.447: 0.0110% ( 1) 00:13:52.913 10247.447 - 10307.025: 0.0990% ( 8) 00:13:52.913 10307.025 - 10366.604: 0.1651% ( 6) 00:13:52.913 10366.604 - 10426.182: 0.2201% ( 5) 00:13:52.913 10426.182 - 10485.760: 0.2531% ( 3) 00:13:52.913 10485.760 - 10545.338: 0.3411% ( 8) 00:13:52.913 10545.338 - 10604.916: 0.4952% ( 14) 00:13:52.913 10604.916 - 10664.495: 0.6382% ( 13) 00:13:52.913 10664.495 - 10724.073: 0.8583% ( 20) 00:13:52.913 10724.073 - 10783.651: 1.0893% ( 21) 00:13:52.913 10783.651 - 10843.229: 1.3644% ( 25) 00:13:52.913 10843.229 - 10902.807: 1.6395% ( 25) 00:13:52.913 10902.807 - 10962.385: 2.1017% ( 42) 00:13:52.913 10962.385 - 11021.964: 2.5748% ( 43) 00:13:52.913 11021.964 - 11081.542: 3.0810% ( 46) 00:13:52.913 11081.542 - 11141.120: 3.6422% ( 51) 00:13:52.913 11141.120 - 11200.698: 4.3024% ( 60) 00:13:52.913 11200.698 - 11260.276: 5.0396% ( 67) 00:13:52.913 11260.276 - 11319.855: 5.8429% ( 73) 00:13:52.913 11319.855 - 11379.433: 6.8002% ( 87) 00:13:52.913 11379.433 - 11439.011: 7.8675% ( 97) 00:13:52.913 11439.011 - 11498.589: 9.0889% ( 111) 00:13:52.913 11498.589 - 11558.167: 10.2113% ( 102) 00:13:52.913 11558.167 - 11617.745: 11.5097% ( 118) 00:13:52.913 11617.745 - 11677.324: 12.8961% ( 126) 00:13:52.913 11677.324 - 11736.902: 14.4366% ( 140) 00:13:52.913 11736.902 - 11796.480: 15.8231% ( 126) 00:13:52.913 11796.480 - 11856.058: 17.6276% ( 164) 00:13:52.913 11856.058 - 11915.636: 19.1791% ( 141) 00:13:52.913 11915.636 - 11975.215: 20.9287% ( 159) 00:13:52.913 11975.215 - 12034.793: 22.7443% ( 165) 00:13:52.913 
12034.793 - 12094.371: 24.4718% ( 157) 00:13:52.913 12094.371 - 12153.949: 26.3424% ( 170) 00:13:52.914 12153.949 - 12213.527: 28.2680% ( 175) 00:13:52.914 12213.527 - 12273.105: 30.1827% ( 174) 00:13:52.914 12273.105 - 12332.684: 32.0092% ( 166) 00:13:52.914 12332.684 - 12392.262: 33.8798% ( 170) 00:13:52.914 12392.262 - 12451.840: 35.7614% ( 171) 00:13:52.914 12451.840 - 12511.418: 37.6540% ( 172) 00:13:52.914 12511.418 - 12570.996: 39.4586% ( 164) 00:13:52.914 12570.996 - 12630.575: 41.3292% ( 170) 00:13:52.914 12630.575 - 12690.153: 43.1778% ( 168) 00:13:52.914 12690.153 - 12749.731: 44.8504% ( 152) 00:13:52.914 12749.731 - 12809.309: 46.5669% ( 156) 00:13:52.914 12809.309 - 12868.887: 48.2284% ( 151) 00:13:52.914 12868.887 - 12928.465: 49.8790% ( 150) 00:13:52.914 12928.465 - 12988.044: 51.4525% ( 143) 00:13:52.914 12988.044 - 13047.622: 53.1140% ( 151) 00:13:52.914 13047.622 - 13107.200: 54.6325% ( 138) 00:13:52.914 13107.200 - 13166.778: 56.2830% ( 150) 00:13:52.914 13166.778 - 13226.356: 57.8015% ( 138) 00:13:52.914 13226.356 - 13285.935: 59.2210% ( 129) 00:13:52.914 13285.935 - 13345.513: 60.7614% ( 140) 00:13:52.914 13345.513 - 13405.091: 62.1699% ( 128) 00:13:52.914 13405.091 - 13464.669: 63.4793% ( 119) 00:13:52.914 13464.669 - 13524.247: 64.7447% ( 115) 00:13:52.914 13524.247 - 13583.825: 65.9331% ( 108) 00:13:52.914 13583.825 - 13643.404: 67.1215% ( 108) 00:13:52.914 13643.404 - 13702.982: 68.2548% ( 103) 00:13:52.914 13702.982 - 13762.560: 69.2342% ( 89) 00:13:52.914 13762.560 - 13822.138: 70.1585% ( 84) 00:13:52.914 13822.138 - 13881.716: 70.9837% ( 75) 00:13:52.914 13881.716 - 13941.295: 71.7210% ( 67) 00:13:52.914 13941.295 - 14000.873: 72.4142% ( 63) 00:13:52.914 14000.873 - 14060.451: 73.1074% ( 63) 00:13:52.914 14060.451 - 14120.029: 73.7786% ( 61) 00:13:52.914 14120.029 - 14179.607: 74.4718% ( 63) 00:13:52.914 14179.607 - 14239.185: 75.0990% ( 57) 00:13:52.914 14239.185 - 14298.764: 75.6272% ( 48) 00:13:52.914 14298.764 - 14358.342: 76.1664% ( 49) 00:13:52.914 14358.342 - 14417.920: 76.7165% ( 50) 00:13:52.914 14417.920 - 14477.498: 77.1457% ( 39) 00:13:52.914 14477.498 - 14537.076: 77.5968% ( 41) 00:13:52.914 14537.076 - 14596.655: 78.0700% ( 43) 00:13:52.914 14596.655 - 14656.233: 78.4881% ( 38) 00:13:52.914 14656.233 - 14715.811: 78.9062% ( 38) 00:13:52.914 14715.811 - 14775.389: 79.3684% ( 42) 00:13:52.914 14775.389 - 14834.967: 79.7315% ( 33) 00:13:52.914 14834.967 - 14894.545: 80.1166% ( 35) 00:13:52.914 14894.545 - 14954.124: 80.4357% ( 29) 00:13:52.914 14954.124 - 15013.702: 80.7879% ( 32) 00:13:52.914 15013.702 - 15073.280: 81.0629% ( 25) 00:13:52.914 15073.280 - 15132.858: 81.3930% ( 30) 00:13:52.914 15132.858 - 15192.436: 81.7452% ( 32) 00:13:52.914 15192.436 - 15252.015: 82.0423% ( 27) 00:13:52.914 15252.015 - 15371.171: 82.5594% ( 47) 00:13:52.914 15371.171 - 15490.327: 83.1206% ( 51) 00:13:52.914 15490.327 - 15609.484: 83.6598% ( 49) 00:13:52.914 15609.484 - 15728.640: 84.1769% ( 47) 00:13:52.914 15728.640 - 15847.796: 84.6721% ( 45) 00:13:52.914 15847.796 - 15966.953: 85.1452% ( 43) 00:13:52.914 15966.953 - 16086.109: 85.6294% ( 44) 00:13:52.914 16086.109 - 16205.265: 85.9595% ( 30) 00:13:52.914 16205.265 - 16324.422: 86.3116% ( 32) 00:13:52.914 16324.422 - 16443.578: 86.6747% ( 33) 00:13:52.914 16443.578 - 16562.735: 86.9498% ( 25) 00:13:52.914 16562.735 - 16681.891: 87.0929% ( 13) 00:13:52.914 16681.891 - 16801.047: 87.1809% ( 8) 00:13:52.914 16801.047 - 16920.204: 87.3239% ( 13) 00:13:52.914 16920.204 - 17039.360: 87.4780% ( 14) 00:13:52.914 
17039.360 - 17158.516: 87.7311% ( 23) 00:13:52.914 17158.516 - 17277.673: 88.0282% ( 27) 00:13:52.914 17277.673 - 17396.829: 88.3583% ( 30) 00:13:52.914 17396.829 - 17515.985: 88.5563% ( 18) 00:13:52.914 17515.985 - 17635.142: 88.8094% ( 23) 00:13:52.914 17635.142 - 17754.298: 89.0625% ( 23) 00:13:52.914 17754.298 - 17873.455: 89.3596% ( 27) 00:13:52.914 17873.455 - 17992.611: 89.6567% ( 27) 00:13:52.914 17992.611 - 18111.767: 89.9868% ( 30) 00:13:52.914 18111.767 - 18230.924: 90.2619% ( 25) 00:13:52.914 18230.924 - 18350.080: 90.5810% ( 29) 00:13:52.914 18350.080 - 18469.236: 90.9001% ( 29) 00:13:52.914 18469.236 - 18588.393: 91.2412% ( 31) 00:13:52.914 18588.393 - 18707.549: 91.6043% ( 33) 00:13:52.914 18707.549 - 18826.705: 91.9344% ( 30) 00:13:52.914 18826.705 - 18945.862: 92.2535% ( 29) 00:13:52.914 18945.862 - 19065.018: 92.5396% ( 26) 00:13:52.914 19065.018 - 19184.175: 92.8367% ( 27) 00:13:52.914 19184.175 - 19303.331: 93.1338% ( 27) 00:13:52.914 19303.331 - 19422.487: 93.4089% ( 25) 00:13:52.914 19422.487 - 19541.644: 93.6510% ( 22) 00:13:52.914 19541.644 - 19660.800: 93.9151% ( 24) 00:13:52.914 19660.800 - 19779.956: 94.1791% ( 24) 00:13:52.914 19779.956 - 19899.113: 94.4762% ( 27) 00:13:52.914 19899.113 - 20018.269: 94.7183% ( 22) 00:13:52.914 20018.269 - 20137.425: 94.9604% ( 22) 00:13:52.914 20137.425 - 20256.582: 95.2135% ( 23) 00:13:52.914 20256.582 - 20375.738: 95.4665% ( 23) 00:13:52.914 20375.738 - 20494.895: 95.6316% ( 15) 00:13:52.914 20494.895 - 20614.051: 95.7636% ( 12) 00:13:52.914 20614.051 - 20733.207: 95.9397% ( 16) 00:13:52.914 20733.207 - 20852.364: 96.0607% ( 11) 00:13:52.914 20852.364 - 20971.520: 96.2258% ( 15) 00:13:52.914 20971.520 - 21090.676: 96.3578% ( 12) 00:13:52.914 21090.676 - 21209.833: 96.5229% ( 15) 00:13:52.914 21209.833 - 21328.989: 96.6549% ( 12) 00:13:52.914 21328.989 - 21448.145: 96.7540% ( 9) 00:13:52.914 21448.145 - 21567.302: 96.8640% ( 10) 00:13:52.914 21567.302 - 21686.458: 96.9520% ( 8) 00:13:52.914 21686.458 - 21805.615: 97.0401% ( 8) 00:13:52.914 21805.615 - 21924.771: 97.1061% ( 6) 00:13:52.914 21924.771 - 22043.927: 97.1831% ( 7) 00:13:52.914 22043.927 - 22163.084: 97.2711% ( 8) 00:13:52.914 22163.084 - 22282.240: 97.3261% ( 5) 00:13:52.914 22282.240 - 22401.396: 97.4142% ( 8) 00:13:52.914 22401.396 - 22520.553: 97.4912% ( 7) 00:13:52.914 22520.553 - 22639.709: 97.5902% ( 9) 00:13:52.914 22639.709 - 22758.865: 97.6452% ( 5) 00:13:52.914 22758.865 - 22878.022: 97.7333% ( 8) 00:13:52.914 22878.022 - 22997.178: 97.8103% ( 7) 00:13:52.914 22997.178 - 23116.335: 97.8763% ( 6) 00:13:52.914 23116.335 - 23235.491: 97.9423% ( 6) 00:13:52.914 23235.491 - 23354.647: 98.0304% ( 8) 00:13:52.914 23354.647 - 23473.804: 98.1294% ( 9) 00:13:52.914 23473.804 - 23592.960: 98.1954% ( 6) 00:13:52.914 23592.960 - 23712.116: 98.2284% ( 3) 00:13:52.914 23712.116 - 23831.273: 98.2504% ( 2) 00:13:52.914 23831.273 - 23950.429: 98.2945% ( 4) 00:13:52.914 23950.429 - 24069.585: 98.3275% ( 3) 00:13:52.914 24069.585 - 24188.742: 98.3605% ( 3) 00:13:52.914 24188.742 - 24307.898: 98.3935% ( 3) 00:13:52.914 24307.898 - 24427.055: 98.4155% ( 2) 00:13:52.914 24427.055 - 24546.211: 98.4485% ( 3) 00:13:52.914 24546.211 - 24665.367: 98.4815% ( 3) 00:13:52.914 24665.367 - 24784.524: 98.5255% ( 4) 00:13:52.914 24784.524 - 24903.680: 98.5475% ( 2) 00:13:52.914 24903.680 - 25022.836: 98.5805% ( 3) 00:13:52.914 25022.836 - 25141.993: 98.5915% ( 1) 00:13:52.914 37891.724 - 38130.036: 98.6246% ( 3) 00:13:52.914 38130.036 - 38368.349: 98.6576% ( 3) 00:13:52.914 38368.349 - 
38606.662: 98.6906% ( 3) 00:13:52.914 38606.662 - 38844.975: 98.7346% ( 4) 00:13:52.914 38844.975 - 39083.287: 98.7676% ( 3) 00:13:52.914 39083.287 - 39321.600: 98.8116% ( 4) 00:13:52.914 39321.600 - 39559.913: 98.8446% ( 3) 00:13:52.914 39559.913 - 39798.225: 98.8886% ( 4) 00:13:52.914 39798.225 - 40036.538: 98.9217% ( 3) 00:13:52.914 40036.538 - 40274.851: 98.9547% ( 3) 00:13:52.914 40274.851 - 40513.164: 98.9987% ( 4) 00:13:52.914 40513.164 - 40751.476: 99.0427% ( 4) 00:13:52.914 40751.476 - 40989.789: 99.0757% ( 3) 00:13:52.914 40989.789 - 41228.102: 99.1087% ( 3) 00:13:52.914 41228.102 - 41466.415: 99.1417% ( 3) 00:13:52.914 41466.415 - 41704.727: 99.1857% ( 4) 00:13:52.914 41704.727 - 41943.040: 99.2298% ( 4) 00:13:52.914 41943.040 - 42181.353: 99.2628% ( 3) 00:13:52.914 42181.353 - 42419.665: 99.2958% ( 3) 00:13:52.915 49807.360 - 50045.673: 99.3178% ( 2) 00:13:52.915 50045.673 - 50283.985: 99.3508% ( 3) 00:13:52.915 50283.985 - 50522.298: 99.3948% ( 4) 00:13:52.915 50522.298 - 50760.611: 99.4278% ( 3) 00:13:52.915 50760.611 - 50998.924: 99.4718% ( 4) 00:13:52.915 50998.924 - 51237.236: 99.5048% ( 3) 00:13:52.915 51237.236 - 51475.549: 99.5489% ( 4) 00:13:52.915 51475.549 - 51713.862: 99.5819% ( 3) 00:13:52.915 51713.862 - 51952.175: 99.6149% ( 3) 00:13:52.915 51952.175 - 52190.487: 99.6479% ( 3) 00:13:52.915 52190.487 - 52428.800: 99.6919% ( 4) 00:13:52.915 52428.800 - 52667.113: 99.7249% ( 3) 00:13:52.915 52667.113 - 52905.425: 99.7689% ( 4) 00:13:52.915 52905.425 - 53143.738: 99.7909% ( 2) 00:13:52.915 53143.738 - 53382.051: 99.8349% ( 4) 00:13:52.915 53382.051 - 53620.364: 99.8790% ( 4) 00:13:52.915 53620.364 - 53858.676: 99.9120% ( 3) 00:13:52.915 53858.676 - 54096.989: 99.9450% ( 3) 00:13:52.915 54096.989 - 54335.302: 99.9890% ( 4) 00:13:52.915 54335.302 - 54573.615: 100.0000% ( 1) 00:13:52.915 00:13:52.915 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:52.915 ============================================================================== 00:13:52.915 Range in us Cumulative IO count 00:13:52.915 10307.025 - 10366.604: 0.0440% ( 4) 00:13:52.915 10366.604 - 10426.182: 0.0770% ( 3) 00:13:52.915 10426.182 - 10485.760: 0.1761% ( 9) 00:13:52.915 10485.760 - 10545.338: 0.2641% ( 8) 00:13:52.915 10545.338 - 10604.916: 0.3301% ( 6) 00:13:52.915 10604.916 - 10664.495: 0.4291% ( 9) 00:13:52.915 10664.495 - 10724.073: 0.5502% ( 11) 00:13:52.915 10724.073 - 10783.651: 0.6822% ( 12) 00:13:52.915 10783.651 - 10843.229: 0.9353% ( 23) 00:13:52.915 10843.229 - 10902.807: 1.1884% ( 23) 00:13:52.915 10902.807 - 10962.385: 1.4305% ( 22) 00:13:52.915 10962.385 - 11021.964: 1.6835% ( 23) 00:13:52.915 11021.964 - 11081.542: 2.0797% ( 36) 00:13:52.915 11081.542 - 11141.120: 2.5198% ( 40) 00:13:52.915 11141.120 - 11200.698: 2.9820% ( 42) 00:13:52.915 11200.698 - 11260.276: 3.5541% ( 52) 00:13:52.915 11260.276 - 11319.855: 4.2254% ( 61) 00:13:52.915 11319.855 - 11379.433: 5.0946% ( 79) 00:13:52.915 11379.433 - 11439.011: 5.8759% ( 71) 00:13:52.915 11439.011 - 11498.589: 6.8442% ( 88) 00:13:52.915 11498.589 - 11558.167: 7.9445% ( 100) 00:13:52.915 11558.167 - 11617.745: 9.3090% ( 124) 00:13:52.915 11617.745 - 11677.324: 10.5304% ( 111) 00:13:52.915 11677.324 - 11736.902: 11.7518% ( 111) 00:13:52.915 11736.902 - 11796.480: 13.0612% ( 119) 00:13:52.915 11796.480 - 11856.058: 14.4696% ( 128) 00:13:52.915 11856.058 - 11915.636: 16.1422% ( 152) 00:13:52.915 11915.636 - 11975.215: 17.9247% ( 162) 00:13:52.915 11975.215 - 12034.793: 19.8504% ( 175) 00:13:52.915 12034.793 - 12094.371: 
21.7870% ( 176) 00:13:52.915 12094.371 - 12153.949: 23.7786% ( 181) 00:13:52.915 12153.949 - 12213.527: 25.7482% ( 179) 00:13:52.915 12213.527 - 12273.105: 27.7839% ( 185) 00:13:52.915 12273.105 - 12332.684: 29.8085% ( 184) 00:13:52.915 12332.684 - 12392.262: 31.7562% ( 177) 00:13:52.915 12392.262 - 12451.840: 33.8358% ( 189) 00:13:52.915 12451.840 - 12511.418: 36.0145% ( 198) 00:13:52.915 12511.418 - 12570.996: 38.1602% ( 195) 00:13:52.915 12570.996 - 12630.575: 40.2069% ( 186) 00:13:52.915 12630.575 - 12690.153: 42.0995% ( 172) 00:13:52.915 12690.153 - 12749.731: 43.8160% ( 156) 00:13:52.915 12749.731 - 12809.309: 45.6976% ( 171) 00:13:52.915 12809.309 - 12868.887: 47.4802% ( 162) 00:13:52.915 12868.887 - 12928.465: 49.1417% ( 151) 00:13:52.915 12928.465 - 12988.044: 50.9463% ( 164) 00:13:52.915 12988.044 - 13047.622: 52.6078% ( 151) 00:13:52.915 13047.622 - 13107.200: 54.3354% ( 157) 00:13:52.915 13107.200 - 13166.778: 55.9199% ( 144) 00:13:52.915 13166.778 - 13226.356: 57.5264% ( 146) 00:13:52.915 13226.356 - 13285.935: 59.1769% ( 150) 00:13:52.915 13285.935 - 13345.513: 60.7064% ( 139) 00:13:52.915 13345.513 - 13405.091: 62.2139% ( 137) 00:13:52.915 13405.091 - 13464.669: 63.5233% ( 119) 00:13:52.915 13464.669 - 13524.247: 64.7997% ( 116) 00:13:52.915 13524.247 - 13583.825: 65.9221% ( 102) 00:13:52.915 13583.825 - 13643.404: 67.0114% ( 99) 00:13:52.915 13643.404 - 13702.982: 68.0018% ( 90) 00:13:52.915 13702.982 - 13762.560: 69.0251% ( 93) 00:13:52.915 13762.560 - 13822.138: 70.0044% ( 89) 00:13:52.915 13822.138 - 13881.716: 70.8847% ( 80) 00:13:52.915 13881.716 - 13941.295: 71.7870% ( 82) 00:13:52.915 13941.295 - 14000.873: 72.5682% ( 71) 00:13:52.915 14000.873 - 14060.451: 73.3935% ( 75) 00:13:52.915 14060.451 - 14120.029: 74.1087% ( 65) 00:13:52.915 14120.029 - 14179.607: 74.7579% ( 59) 00:13:52.915 14179.607 - 14239.185: 75.3301% ( 52) 00:13:52.915 14239.185 - 14298.764: 75.8473% ( 47) 00:13:52.915 14298.764 - 14358.342: 76.3754% ( 48) 00:13:52.915 14358.342 - 14417.920: 76.8486% ( 43) 00:13:52.915 14417.920 - 14477.498: 77.3107% ( 42) 00:13:52.915 14477.498 - 14537.076: 77.7289% ( 38) 00:13:52.915 14537.076 - 14596.655: 78.1580% ( 39) 00:13:52.915 14596.655 - 14656.233: 78.5321% ( 34) 00:13:52.915 14656.233 - 14715.811: 78.9613% ( 39) 00:13:52.915 14715.811 - 14775.389: 79.3794% ( 38) 00:13:52.915 14775.389 - 14834.967: 79.7315% ( 32) 00:13:52.915 14834.967 - 14894.545: 80.0286% ( 27) 00:13:52.915 14894.545 - 14954.124: 80.4688% ( 40) 00:13:52.915 14954.124 - 15013.702: 80.9089% ( 40) 00:13:52.915 15013.702 - 15073.280: 81.3710% ( 42) 00:13:52.915 15073.280 - 15132.858: 81.7562% ( 35) 00:13:52.915 15132.858 - 15192.436: 82.0753% ( 29) 00:13:52.915 15192.436 - 15252.015: 82.4934% ( 38) 00:13:52.915 15252.015 - 15371.171: 83.2196% ( 66) 00:13:52.915 15371.171 - 15490.327: 83.9899% ( 70) 00:13:52.915 15490.327 - 15609.484: 84.5951% ( 55) 00:13:52.915 15609.484 - 15728.640: 85.2003% ( 55) 00:13:52.915 15728.640 - 15847.796: 85.7174% ( 47) 00:13:52.915 15847.796 - 15966.953: 86.2346% ( 47) 00:13:52.915 15966.953 - 16086.109: 86.6197% ( 35) 00:13:52.915 16086.109 - 16205.265: 87.0158% ( 36) 00:13:52.915 16205.265 - 16324.422: 87.3790% ( 33) 00:13:52.915 16324.422 - 16443.578: 87.6651% ( 26) 00:13:52.915 16443.578 - 16562.735: 87.8961% ( 21) 00:13:52.915 16562.735 - 16681.891: 88.0392% ( 13) 00:13:52.915 16681.891 - 16801.047: 88.1822% ( 13) 00:13:52.915 16801.047 - 16920.204: 88.2923% ( 10) 00:13:52.915 16920.204 - 17039.360: 88.4683% ( 16) 00:13:52.915 17039.360 - 17158.516: 88.6444% 
( 16) 00:13:52.915 17158.516 - 17277.673: 88.8314% ( 17) 00:13:52.915 17277.673 - 17396.829: 89.1505% ( 29) 00:13:52.915 17396.829 - 17515.985: 89.4806% ( 30) 00:13:52.915 17515.985 - 17635.142: 89.7227% ( 22) 00:13:52.915 17635.142 - 17754.298: 89.8988% ( 16) 00:13:52.915 17754.298 - 17873.455: 90.3169% ( 38) 00:13:52.915 17873.455 - 17992.611: 90.6360% ( 29) 00:13:52.915 17992.611 - 18111.767: 90.8451% ( 19) 00:13:52.915 18111.767 - 18230.924: 91.0871% ( 22) 00:13:52.915 18230.924 - 18350.080: 91.3512% ( 24) 00:13:52.915 18350.080 - 18469.236: 91.6153% ( 24) 00:13:52.915 18469.236 - 18588.393: 91.8574% ( 22) 00:13:52.915 18588.393 - 18707.549: 92.1215% ( 24) 00:13:52.915 18707.549 - 18826.705: 92.4406% ( 29) 00:13:52.915 18826.705 - 18945.862: 92.7267% ( 26) 00:13:52.915 18945.862 - 19065.018: 92.9908% ( 24) 00:13:52.915 19065.018 - 19184.175: 93.2879% ( 27) 00:13:52.915 19184.175 - 19303.331: 93.5849% ( 27) 00:13:52.915 19303.331 - 19422.487: 93.8600% ( 25) 00:13:52.915 19422.487 - 19541.644: 94.1241% ( 24) 00:13:52.915 19541.644 - 19660.800: 94.3662% ( 22) 00:13:52.915 19660.800 - 19779.956: 94.6303% ( 24) 00:13:52.915 19779.956 - 19899.113: 94.8834% ( 23) 00:13:52.915 19899.113 - 20018.269: 95.1034% ( 20) 00:13:52.915 20018.269 - 20137.425: 95.2685% ( 15) 00:13:52.915 20137.425 - 20256.582: 95.3895% ( 11) 00:13:52.915 20256.582 - 20375.738: 95.5106% ( 11) 00:13:52.915 20375.738 - 20494.895: 95.6206% ( 10) 00:13:52.915 20494.895 - 20614.051: 95.7416% ( 11) 00:13:52.915 20614.051 - 20733.207: 95.8517% ( 10) 00:13:52.915 20733.207 - 20852.364: 96.0387% ( 17) 00:13:52.915 20852.364 - 20971.520: 96.2478% ( 19) 00:13:52.915 20971.520 - 21090.676: 96.4679% ( 20) 00:13:52.915 21090.676 - 21209.833: 96.6439% ( 16) 00:13:52.915 21209.833 - 21328.989: 96.7870% ( 13) 00:13:52.915 21328.989 - 21448.145: 96.8970% ( 10) 00:13:52.915 21448.145 - 21567.302: 97.0070% ( 10) 00:13:52.915 21567.302 - 21686.458: 97.0841% ( 7) 00:13:52.915 21686.458 - 21805.615: 97.1721% ( 8) 00:13:52.915 21805.615 - 21924.771: 97.2161% ( 4) 00:13:52.915 21924.771 - 22043.927: 97.2711% ( 5) 00:13:52.915 22043.927 - 22163.084: 97.3261% ( 5) 00:13:52.915 22163.084 - 22282.240: 97.3702% ( 4) 00:13:52.915 22282.240 - 22401.396: 97.4252% ( 5) 00:13:52.915 22401.396 - 22520.553: 97.6122% ( 17) 00:13:52.915 22520.553 - 22639.709: 97.7883% ( 16) 00:13:52.915 22639.709 - 22758.865: 97.9423% ( 14) 00:13:52.915 22758.865 - 22878.022: 98.0524% ( 10) 00:13:52.915 22878.022 - 22997.178: 98.1404% ( 8) 00:13:52.915 22997.178 - 23116.335: 98.2394% ( 9) 00:13:52.915 23116.335 - 23235.491: 98.3275% ( 8) 00:13:52.915 23235.491 - 23354.647: 98.3825% ( 5) 00:13:52.915 23354.647 - 23473.804: 98.4155% ( 3) 00:13:52.915 23473.804 - 23592.960: 98.4485% ( 3) 00:13:52.916 23592.960 - 23712.116: 98.4925% ( 4) 00:13:52.916 23712.116 - 23831.273: 98.5145% ( 2) 00:13:52.916 23831.273 - 23950.429: 98.5585% ( 4) 00:13:52.916 23950.429 - 24069.585: 98.5915% ( 3) 00:13:52.916 35031.971 - 35270.284: 98.6246% ( 3) 00:13:52.916 35270.284 - 35508.596: 98.7346% ( 10) 00:13:52.916 35508.596 - 35746.909: 98.8336% ( 9) 00:13:52.916 35746.909 - 35985.222: 98.8666% ( 3) 00:13:52.916 35985.222 - 36223.535: 98.8996% ( 3) 00:13:52.916 36223.535 - 36461.847: 98.9437% ( 4) 00:13:52.916 36461.847 - 36700.160: 98.9767% ( 3) 00:13:52.916 36700.160 - 36938.473: 99.0097% ( 3) 00:13:52.916 36938.473 - 37176.785: 99.0537% ( 4) 00:13:52.916 37415.098 - 37653.411: 99.0647% ( 1) 00:13:52.916 37653.411 - 37891.724: 99.0867% ( 2) 00:13:52.916 37891.724 - 38130.036: 99.1527% ( 6) 
00:13:52.916 38130.036 - 38368.349: 99.1967% ( 4) 00:13:52.916 38368.349 - 38606.662: 99.2408% ( 4) 00:13:52.916 38606.662 - 38844.975: 99.2738% ( 3) 00:13:52.916 38844.975 - 39083.287: 99.2958% ( 2) 00:13:52.916 45994.356 - 46232.669: 99.3508% ( 5) 00:13:52.916 46232.669 - 46470.982: 99.3728% ( 2) 00:13:52.916 46470.982 - 46709.295: 99.4058% ( 3) 00:13:52.916 46709.295 - 46947.607: 99.4278% ( 2) 00:13:52.916 46947.607 - 47185.920: 99.4608% ( 3) 00:13:52.916 47185.920 - 47424.233: 99.4938% ( 3) 00:13:52.916 47424.233 - 47662.545: 99.5379% ( 4) 00:13:52.916 47662.545 - 47900.858: 99.5819% ( 4) 00:13:52.916 47900.858 - 48139.171: 99.6149% ( 3) 00:13:52.916 48139.171 - 48377.484: 99.6589% ( 4) 00:13:52.916 48377.484 - 48615.796: 99.7029% ( 4) 00:13:52.916 48615.796 - 48854.109: 99.7469% ( 4) 00:13:52.916 48854.109 - 49092.422: 99.7799% ( 3) 00:13:52.916 49092.422 - 49330.735: 99.8239% ( 4) 00:13:52.916 49330.735 - 49569.047: 99.8570% ( 3) 00:13:52.916 49569.047 - 49807.360: 99.9010% ( 4) 00:13:52.916 49807.360 - 50045.673: 99.9450% ( 4) 00:13:52.916 50045.673 - 50283.985: 99.9890% ( 4) 00:13:52.916 50283.985 - 50522.298: 100.0000% ( 1) 00:13:52.916 00:13:52.916 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:52.916 ============================================================================== 00:13:52.916 Range in us Cumulative IO count 00:13:52.916 10366.604 - 10426.182: 0.0110% ( 1) 00:13:52.916 10426.182 - 10485.760: 0.0660% ( 5) 00:13:52.916 10485.760 - 10545.338: 0.0990% ( 3) 00:13:52.916 10545.338 - 10604.916: 0.1320% ( 3) 00:13:52.916 10604.916 - 10664.495: 0.2091% ( 7) 00:13:52.916 10664.495 - 10724.073: 0.3741% ( 15) 00:13:52.916 10724.073 - 10783.651: 0.5722% ( 18) 00:13:52.916 10783.651 - 10843.229: 0.8143% ( 22) 00:13:52.916 10843.229 - 10902.807: 1.1114% ( 27) 00:13:52.916 10902.807 - 10962.385: 1.5625% ( 41) 00:13:52.916 10962.385 - 11021.964: 1.9586% ( 36) 00:13:52.916 11021.964 - 11081.542: 2.4428% ( 44) 00:13:52.916 11081.542 - 11141.120: 2.9599% ( 47) 00:13:52.916 11141.120 - 11200.698: 3.5651% ( 55) 00:13:52.916 11200.698 - 11260.276: 4.3134% ( 68) 00:13:52.916 11260.276 - 11319.855: 5.0506% ( 67) 00:13:52.916 11319.855 - 11379.433: 5.7768% ( 66) 00:13:52.916 11379.433 - 11439.011: 6.6131% ( 76) 00:13:52.916 11439.011 - 11498.589: 7.3393% ( 66) 00:13:52.916 11498.589 - 11558.167: 8.3517% ( 92) 00:13:52.916 11558.167 - 11617.745: 9.3420% ( 90) 00:13:52.916 11617.745 - 11677.324: 10.4974% ( 105) 00:13:52.916 11677.324 - 11736.902: 11.6967% ( 109) 00:13:52.916 11736.902 - 11796.480: 13.1712% ( 134) 00:13:52.916 11796.480 - 11856.058: 14.6127% ( 131) 00:13:52.916 11856.058 - 11915.636: 16.0651% ( 132) 00:13:52.916 11915.636 - 11975.215: 17.6386% ( 143) 00:13:52.916 11975.215 - 12034.793: 19.3442% ( 155) 00:13:52.916 12034.793 - 12094.371: 21.2368% ( 172) 00:13:52.916 12094.371 - 12153.949: 23.1514% ( 174) 00:13:52.916 12153.949 - 12213.527: 24.9340% ( 162) 00:13:52.916 12213.527 - 12273.105: 26.9476% ( 183) 00:13:52.916 12273.105 - 12332.684: 29.1483% ( 200) 00:13:52.916 12332.684 - 12392.262: 31.4481% ( 209) 00:13:52.916 12392.262 - 12451.840: 33.5827% ( 194) 00:13:52.916 12451.840 - 12511.418: 35.6404% ( 187) 00:13:52.916 12511.418 - 12570.996: 37.8631% ( 202) 00:13:52.916 12570.996 - 12630.575: 39.9318% ( 188) 00:13:52.916 12630.575 - 12690.153: 41.9344% ( 182) 00:13:52.916 12690.153 - 12749.731: 43.8710% ( 176) 00:13:52.916 12749.731 - 12809.309: 45.7306% ( 169) 00:13:52.916 12809.309 - 12868.887: 47.7223% ( 181) 00:13:52.916 12868.887 - 12928.465: 
49.5048% ( 162) 00:13:52.916 12928.465 - 12988.044: 51.3534% ( 168) 00:13:52.916 12988.044 - 13047.622: 53.1910% ( 167) 00:13:52.916 13047.622 - 13107.200: 54.9846% ( 163) 00:13:52.916 13107.200 - 13166.778: 56.7232% ( 158) 00:13:52.916 13166.778 - 13226.356: 58.4617% ( 158) 00:13:52.916 13226.356 - 13285.935: 60.0352% ( 143) 00:13:52.916 13285.935 - 13345.513: 61.4657% ( 130) 00:13:52.916 13345.513 - 13405.091: 62.9071% ( 131) 00:13:52.916 13405.091 - 13464.669: 64.3156% ( 128) 00:13:52.916 13464.669 - 13524.247: 65.6250% ( 119) 00:13:52.916 13524.247 - 13583.825: 66.6923% ( 97) 00:13:52.916 13583.825 - 13643.404: 67.5726% ( 80) 00:13:52.916 13643.404 - 13702.982: 68.3979% ( 75) 00:13:52.916 13702.982 - 13762.560: 69.2232% ( 75) 00:13:52.916 13762.560 - 13822.138: 70.0264% ( 73) 00:13:52.916 13822.138 - 13881.716: 70.8957% ( 79) 00:13:52.916 13881.716 - 13941.295: 71.7430% ( 77) 00:13:52.916 13941.295 - 14000.873: 72.5352% ( 72) 00:13:52.916 14000.873 - 14060.451: 73.2174% ( 62) 00:13:52.916 14060.451 - 14120.029: 73.8116% ( 54) 00:13:52.916 14120.029 - 14179.607: 74.4828% ( 61) 00:13:52.916 14179.607 - 14239.185: 75.0880% ( 55) 00:13:52.916 14239.185 - 14298.764: 75.7372% ( 59) 00:13:52.916 14298.764 - 14358.342: 76.2654% ( 48) 00:13:52.916 14358.342 - 14417.920: 76.7606% ( 45) 00:13:52.916 14417.920 - 14477.498: 77.2337% ( 43) 00:13:52.916 14477.498 - 14537.076: 77.7069% ( 43) 00:13:52.916 14537.076 - 14596.655: 78.1690% ( 42) 00:13:52.916 14596.655 - 14656.233: 78.6202% ( 41) 00:13:52.916 14656.233 - 14715.811: 79.1373% ( 47) 00:13:52.916 14715.811 - 14775.389: 79.6435% ( 46) 00:13:52.916 14775.389 - 14834.967: 80.1386% ( 45) 00:13:52.916 14834.967 - 14894.545: 80.5898% ( 41) 00:13:52.916 14894.545 - 14954.124: 81.0409% ( 41) 00:13:52.916 14954.124 - 15013.702: 81.4040% ( 33) 00:13:52.916 15013.702 - 15073.280: 81.7672% ( 33) 00:13:52.916 15073.280 - 15132.858: 82.1193% ( 32) 00:13:52.916 15132.858 - 15192.436: 82.4604% ( 31) 00:13:52.916 15192.436 - 15252.015: 82.7245% ( 24) 00:13:52.916 15252.015 - 15371.171: 83.2196% ( 45) 00:13:52.916 15371.171 - 15490.327: 83.8358% ( 56) 00:13:52.916 15490.327 - 15609.484: 84.4080% ( 52) 00:13:52.916 15609.484 - 15728.640: 84.9912% ( 53) 00:13:52.916 15728.640 - 15847.796: 85.5854% ( 54) 00:13:52.916 15847.796 - 15966.953: 86.0695% ( 44) 00:13:52.916 15966.953 - 16086.109: 86.4987% ( 39) 00:13:52.916 16086.109 - 16205.265: 86.9938% ( 45) 00:13:52.916 16205.265 - 16324.422: 87.4230% ( 39) 00:13:52.916 16324.422 - 16443.578: 87.7641% ( 31) 00:13:52.916 16443.578 - 16562.735: 88.0502% ( 26) 00:13:52.916 16562.735 - 16681.891: 88.2042% ( 14) 00:13:52.916 16681.891 - 16801.047: 88.3143% ( 10) 00:13:52.916 16801.047 - 16920.204: 88.4353% ( 11) 00:13:52.916 16920.204 - 17039.360: 88.5453% ( 10) 00:13:52.916 17039.360 - 17158.516: 88.6554% ( 10) 00:13:52.916 17158.516 - 17277.673: 88.7764% ( 11) 00:13:52.916 17277.673 - 17396.829: 88.9195% ( 13) 00:13:52.916 17396.829 - 17515.985: 89.1725% ( 23) 00:13:52.916 17515.985 - 17635.142: 89.5357% ( 33) 00:13:52.916 17635.142 - 17754.298: 89.8438% ( 28) 00:13:52.916 17754.298 - 17873.455: 90.0968% ( 23) 00:13:52.916 17873.455 - 17992.611: 90.3499% ( 23) 00:13:52.916 17992.611 - 18111.767: 90.6030% ( 23) 00:13:52.916 18111.767 - 18230.924: 90.8341% ( 21) 00:13:52.916 18230.924 - 18350.080: 91.0761% ( 22) 00:13:52.916 18350.080 - 18469.236: 91.3292% ( 23) 00:13:52.916 18469.236 - 18588.393: 91.5713% ( 22) 00:13:52.916 18588.393 - 18707.549: 91.8024% ( 21) 00:13:52.916 18707.549 - 18826.705: 92.0114% ( 19) 
00:13:52.916 18826.705 - 18945.862: 92.2535% ( 22) 00:13:52.916 18945.862 - 19065.018: 92.4846% ( 21) 00:13:52.916 19065.018 - 19184.175: 92.7157% ( 21) 00:13:52.916 19184.175 - 19303.331: 92.9688% ( 23) 00:13:52.916 19303.331 - 19422.487: 93.2328% ( 24) 00:13:52.916 19422.487 - 19541.644: 93.5189% ( 26) 00:13:52.916 19541.644 - 19660.800: 93.8490% ( 30) 00:13:52.916 19660.800 - 19779.956: 94.1571% ( 28) 00:13:52.916 19779.956 - 19899.113: 94.4652% ( 28) 00:13:52.916 19899.113 - 20018.269: 94.6743% ( 19) 00:13:52.917 20018.269 - 20137.425: 94.8504% ( 16) 00:13:52.917 20137.425 - 20256.582: 95.0484% ( 18) 00:13:52.917 20256.582 - 20375.738: 95.3785% ( 30) 00:13:52.917 20375.738 - 20494.895: 95.5656% ( 17) 00:13:52.917 20494.895 - 20614.051: 95.7746% ( 19) 00:13:52.917 20614.051 - 20733.207: 95.9507% ( 16) 00:13:52.917 20733.207 - 20852.364: 96.2588% ( 28) 00:13:52.917 20852.364 - 20971.520: 96.5559% ( 27) 00:13:52.917 20971.520 - 21090.676: 96.8530% ( 27) 00:13:52.917 21090.676 - 21209.833: 97.0951% ( 22) 00:13:52.917 21209.833 - 21328.989: 97.3041% ( 19) 00:13:52.917 21328.989 - 21448.145: 97.4472% ( 13) 00:13:52.917 21448.145 - 21567.302: 97.6122% ( 15) 00:13:52.917 21567.302 - 21686.458: 97.6893% ( 7) 00:13:52.917 21686.458 - 21805.615: 97.7773% ( 8) 00:13:52.917 21805.615 - 21924.771: 97.8653% ( 8) 00:13:52.917 21924.771 - 22043.927: 97.9643% ( 9) 00:13:52.917 22043.927 - 22163.084: 98.0414% ( 7) 00:13:52.917 22163.084 - 22282.240: 98.0964% ( 5) 00:13:52.917 22282.240 - 22401.396: 98.1514% ( 5) 00:13:52.917 22401.396 - 22520.553: 98.2174% ( 6) 00:13:52.917 22520.553 - 22639.709: 98.2724% ( 5) 00:13:52.917 22639.709 - 22758.865: 98.3385% ( 6) 00:13:52.917 22758.865 - 22878.022: 98.4045% ( 6) 00:13:52.917 22878.022 - 22997.178: 98.4485% ( 4) 00:13:52.917 22997.178 - 23116.335: 98.5145% ( 6) 00:13:52.917 23116.335 - 23235.491: 98.5695% ( 5) 00:13:52.917 23235.491 - 23354.647: 98.5915% ( 2) 00:13:52.917 30980.655 - 31218.967: 98.6246% ( 3) 00:13:52.917 31218.967 - 31457.280: 98.6686% ( 4) 00:13:52.917 31457.280 - 31695.593: 98.7126% ( 4) 00:13:52.917 31695.593 - 31933.905: 98.7566% ( 4) 00:13:52.917 31933.905 - 32172.218: 98.8006% ( 4) 00:13:52.917 32172.218 - 32410.531: 98.8446% ( 4) 00:13:52.917 32410.531 - 32648.844: 98.8886% ( 4) 00:13:52.917 32648.844 - 32887.156: 98.9217% ( 3) 00:13:52.917 32887.156 - 33125.469: 98.9657% ( 4) 00:13:52.917 33125.469 - 33363.782: 99.0097% ( 4) 00:13:52.917 33363.782 - 33602.095: 99.0537% ( 4) 00:13:52.917 33602.095 - 33840.407: 99.0977% ( 4) 00:13:52.917 33840.407 - 34078.720: 99.1417% ( 4) 00:13:52.917 34078.720 - 34317.033: 99.1747% ( 3) 00:13:52.917 34317.033 - 34555.345: 99.2188% ( 4) 00:13:52.917 34555.345 - 34793.658: 99.2628% ( 4) 00:13:52.917 34793.658 - 35031.971: 99.2958% ( 3) 00:13:52.917 42896.291 - 43134.604: 99.3178% ( 2) 00:13:52.917 43134.604 - 43372.916: 99.3508% ( 3) 00:13:52.917 43372.916 - 43611.229: 99.3948% ( 4) 00:13:52.917 43611.229 - 43849.542: 99.4388% ( 4) 00:13:52.917 43849.542 - 44087.855: 99.4718% ( 3) 00:13:52.917 44087.855 - 44326.167: 99.5158% ( 4) 00:13:52.917 44326.167 - 44564.480: 99.5599% ( 4) 00:13:52.917 44564.480 - 44802.793: 99.5929% ( 3) 00:13:52.917 44802.793 - 45041.105: 99.6369% ( 4) 00:13:52.917 45041.105 - 45279.418: 99.6809% ( 4) 00:13:52.917 45279.418 - 45517.731: 99.7249% ( 4) 00:13:52.917 45517.731 - 45756.044: 99.7579% ( 3) 00:13:52.917 45756.044 - 45994.356: 99.8019% ( 4) 00:13:52.917 45994.356 - 46232.669: 99.8460% ( 4) 00:13:52.917 46232.669 - 46470.982: 99.8900% ( 4) 00:13:52.917 46470.982 - 
46709.295: 99.9340% ( 4) 00:13:52.917 46709.295 - 46947.607: 99.9780% ( 4) 00:13:52.917 46947.607 - 47185.920: 100.0000% ( 2) 00:13:52.917 00:13:52.917 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:52.917 ============================================================================== 00:13:52.917 Range in us Cumulative IO count 00:13:52.917 10545.338 - 10604.916: 0.0440% ( 4) 00:13:52.917 10604.916 - 10664.495: 0.1651% ( 11) 00:13:52.917 10664.495 - 10724.073: 0.4291% ( 24) 00:13:52.917 10724.073 - 10783.651: 0.7042% ( 25) 00:13:52.917 10783.651 - 10843.229: 0.9243% ( 20) 00:13:52.917 10843.229 - 10902.807: 1.2984% ( 34) 00:13:52.917 10902.807 - 10962.385: 1.7165% ( 38) 00:13:52.917 10962.385 - 11021.964: 2.1567% ( 40) 00:13:52.917 11021.964 - 11081.542: 2.5858% ( 39) 00:13:52.917 11081.542 - 11141.120: 3.0480% ( 42) 00:13:52.917 11141.120 - 11200.698: 3.7632% ( 65) 00:13:52.917 11200.698 - 11260.276: 4.3574% ( 54) 00:13:52.917 11260.276 - 11319.855: 5.2157% ( 78) 00:13:52.917 11319.855 - 11379.433: 6.0189% ( 73) 00:13:52.917 11379.433 - 11439.011: 6.9102% ( 81) 00:13:52.917 11439.011 - 11498.589: 7.8015% ( 81) 00:13:52.917 11498.589 - 11558.167: 8.7698% ( 88) 00:13:52.917 11558.167 - 11617.745: 9.8702% ( 100) 00:13:52.917 11617.745 - 11677.324: 11.1246% ( 114) 00:13:52.917 11677.324 - 11736.902: 12.4780% ( 123) 00:13:52.917 11736.902 - 11796.480: 14.1175% ( 149) 00:13:52.917 11796.480 - 11856.058: 15.6360% ( 138) 00:13:52.917 11856.058 - 11915.636: 17.0885% ( 132) 00:13:52.917 11915.636 - 11975.215: 18.6620% ( 143) 00:13:52.917 11975.215 - 12034.793: 20.2135% ( 141) 00:13:52.917 12034.793 - 12094.371: 21.9960% ( 162) 00:13:52.917 12094.371 - 12153.949: 23.9877% ( 181) 00:13:52.917 12153.949 - 12213.527: 26.1774% ( 199) 00:13:52.917 12213.527 - 12273.105: 28.4331% ( 205) 00:13:52.917 12273.105 - 12332.684: 30.4357% ( 182) 00:13:52.917 12332.684 - 12392.262: 32.3944% ( 178) 00:13:52.917 12392.262 - 12451.840: 34.4080% ( 183) 00:13:52.917 12451.840 - 12511.418: 36.4547% ( 186) 00:13:52.917 12511.418 - 12570.996: 38.6774% ( 202) 00:13:52.917 12570.996 - 12630.575: 40.7570% ( 189) 00:13:52.917 12630.575 - 12690.153: 42.7817% ( 184) 00:13:52.917 12690.153 - 12749.731: 44.7183% ( 176) 00:13:52.917 12749.731 - 12809.309: 46.5449% ( 166) 00:13:52.917 12809.309 - 12868.887: 48.3715% ( 166) 00:13:52.917 12868.887 - 12928.465: 50.2751% ( 173) 00:13:52.917 12928.465 - 12988.044: 52.1237% ( 168) 00:13:52.917 12988.044 - 13047.622: 53.9943% ( 170) 00:13:52.917 13047.622 - 13107.200: 55.7879% ( 163) 00:13:52.917 13107.200 - 13166.778: 57.6034% ( 165) 00:13:52.917 13166.778 - 13226.356: 59.2980% ( 154) 00:13:52.917 13226.356 - 13285.935: 60.8165% ( 138) 00:13:52.917 13285.935 - 13345.513: 62.1259% ( 119) 00:13:52.917 13345.513 - 13405.091: 63.4573% ( 121) 00:13:52.917 13405.091 - 13464.669: 64.5246% ( 97) 00:13:52.917 13464.669 - 13524.247: 65.6910% ( 106) 00:13:52.917 13524.247 - 13583.825: 66.7694% ( 98) 00:13:52.917 13583.825 - 13643.404: 67.8697% ( 100) 00:13:52.917 13643.404 - 13702.982: 68.9811% ( 101) 00:13:52.917 13702.982 - 13762.560: 69.8834% ( 82) 00:13:52.917 13762.560 - 13822.138: 70.5766% ( 63) 00:13:52.917 13822.138 - 13881.716: 71.2808% ( 64) 00:13:52.917 13881.716 - 13941.295: 71.9410% ( 60) 00:13:52.917 13941.295 - 14000.873: 72.5132% ( 52) 00:13:52.917 14000.873 - 14060.451: 73.0744% ( 51) 00:13:52.917 14060.451 - 14120.029: 73.6026% ( 48) 00:13:52.917 14120.029 - 14179.607: 74.1637% ( 51) 00:13:52.917 14179.607 - 14239.185: 74.6369% ( 43) 00:13:52.917 14239.185 - 
14298.764: 75.1430% ( 46) 00:13:52.917 14298.764 - 14358.342: 75.6382% ( 45) 00:13:52.917 14358.342 - 14417.920: 76.1224% ( 44) 00:13:52.917 14417.920 - 14477.498: 76.5735% ( 41) 00:13:52.917 14477.498 - 14537.076: 76.9806% ( 37) 00:13:52.917 14537.076 - 14596.655: 77.3768% ( 36) 00:13:52.917 14596.655 - 14656.233: 77.6959% ( 29) 00:13:52.917 14656.233 - 14715.811: 78.0040% ( 28) 00:13:52.917 14715.811 - 14775.389: 78.4551% ( 41) 00:13:52.917 14775.389 - 14834.967: 78.8292% ( 34) 00:13:52.917 14834.967 - 14894.545: 79.2033% ( 34) 00:13:52.917 14894.545 - 14954.124: 79.5995% ( 36) 00:13:52.917 14954.124 - 15013.702: 80.0396% ( 40) 00:13:52.917 15013.702 - 15073.280: 80.5128% ( 43) 00:13:52.917 15073.280 - 15132.858: 80.9199% ( 37) 00:13:52.917 15132.858 - 15192.436: 81.3600% ( 40) 00:13:52.917 15192.436 - 15252.015: 81.6681% ( 28) 00:13:52.917 15252.015 - 15371.171: 82.1083% ( 40) 00:13:52.917 15371.171 - 15490.327: 82.7465% ( 58) 00:13:52.917 15490.327 - 15609.484: 83.3847% ( 58) 00:13:52.917 15609.484 - 15728.640: 84.1109% ( 66) 00:13:52.917 15728.640 - 15847.796: 84.8812% ( 70) 00:13:52.917 15847.796 - 15966.953: 85.4754% ( 54) 00:13:52.917 15966.953 - 16086.109: 86.0145% ( 49) 00:13:52.917 16086.109 - 16205.265: 86.4547% ( 40) 00:13:52.917 16205.265 - 16324.422: 87.0048% ( 50) 00:13:52.917 16324.422 - 16443.578: 87.4120% ( 37) 00:13:52.917 16443.578 - 16562.735: 87.7421% ( 30) 00:13:52.917 16562.735 - 16681.891: 87.9291% ( 17) 00:13:52.917 16681.891 - 16801.047: 88.1052% ( 16) 00:13:52.917 16801.047 - 16920.204: 88.2592% ( 14) 00:13:52.917 16920.204 - 17039.360: 88.5013% ( 22) 00:13:52.917 17039.360 - 17158.516: 88.7104% ( 19) 00:13:52.917 17158.516 - 17277.673: 88.9305% ( 20) 00:13:52.917 17277.673 - 17396.829: 89.1285% ( 18) 00:13:52.917 17396.829 - 17515.985: 89.4366% ( 28) 00:13:52.917 17515.985 - 17635.142: 89.7447% ( 28) 00:13:52.917 17635.142 - 17754.298: 89.9978% ( 23) 00:13:52.917 17754.298 - 17873.455: 90.2289% ( 21) 00:13:52.917 17873.455 - 17992.611: 90.4820% ( 23) 00:13:52.917 17992.611 - 18111.767: 90.6800% ( 18) 00:13:52.917 18111.767 - 18230.924: 90.9331% ( 23) 00:13:52.918 18230.924 - 18350.080: 91.1312% ( 18) 00:13:52.918 18350.080 - 18469.236: 91.3072% ( 16) 00:13:52.918 18469.236 - 18588.393: 91.5053% ( 18) 00:13:52.918 18588.393 - 18707.549: 91.7254% ( 20) 00:13:52.918 18707.549 - 18826.705: 91.9454% ( 20) 00:13:52.918 18826.705 - 18945.862: 92.1765% ( 21) 00:13:52.918 18945.862 - 19065.018: 92.3966% ( 20) 00:13:52.918 19065.018 - 19184.175: 92.6827% ( 26) 00:13:52.918 19184.175 - 19303.331: 93.0018% ( 29) 00:13:52.918 19303.331 - 19422.487: 93.2989% ( 27) 00:13:52.918 19422.487 - 19541.644: 93.5519% ( 23) 00:13:52.918 19541.644 - 19660.800: 93.8490% ( 27) 00:13:52.918 19660.800 - 19779.956: 94.1241% ( 25) 00:13:52.918 19779.956 - 19899.113: 94.5092% ( 35) 00:13:52.918 19899.113 - 20018.269: 94.9384% ( 39) 00:13:52.918 20018.269 - 20137.425: 95.1915% ( 23) 00:13:52.918 20137.425 - 20256.582: 95.4115% ( 20) 00:13:52.918 20256.582 - 20375.738: 95.6206% ( 19) 00:13:52.918 20375.738 - 20494.895: 95.8187% ( 18) 00:13:52.918 20494.895 - 20614.051: 96.0387% ( 20) 00:13:52.918 20614.051 - 20733.207: 96.2368% ( 18) 00:13:52.918 20733.207 - 20852.364: 96.4459% ( 19) 00:13:52.918 20852.364 - 20971.520: 96.7760% ( 30) 00:13:52.918 20971.520 - 21090.676: 97.1171% ( 31) 00:13:52.918 21090.676 - 21209.833: 97.3812% ( 24) 00:13:52.918 21209.833 - 21328.989: 97.6012% ( 20) 00:13:52.918 21328.989 - 21448.145: 97.7883% ( 17) 00:13:52.918 21448.145 - 21567.302: 97.8543% ( 6) 
00:13:52.918 21567.302 - 21686.458: 97.8983% ( 4) 00:13:52.918 21686.458 - 21805.615: 97.9423% ( 4) 00:13:52.918 21805.615 - 21924.771: 97.9864% ( 4) 00:13:52.918 21924.771 - 22043.927: 98.0414% ( 5) 00:13:52.918 22043.927 - 22163.084: 98.0854% ( 4) 00:13:52.918 22163.084 - 22282.240: 98.1294% ( 4) 00:13:52.918 22282.240 - 22401.396: 98.1844% ( 5) 00:13:52.918 22401.396 - 22520.553: 98.2284% ( 4) 00:13:52.918 22520.553 - 22639.709: 98.2835% ( 5) 00:13:52.918 22639.709 - 22758.865: 98.3495% ( 6) 00:13:52.918 22758.865 - 22878.022: 98.4045% ( 5) 00:13:52.918 22878.022 - 22997.178: 98.4595% ( 5) 00:13:52.918 22997.178 - 23116.335: 98.5255% ( 6) 00:13:52.918 23116.335 - 23235.491: 98.5805% ( 5) 00:13:52.918 23235.491 - 23354.647: 98.5915% ( 1) 00:13:52.918 27286.807 - 27405.964: 98.6026% ( 1) 00:13:52.918 27405.964 - 27525.120: 98.6246% ( 2) 00:13:52.918 27525.120 - 27644.276: 98.6466% ( 2) 00:13:52.918 27644.276 - 27763.433: 98.6576% ( 1) 00:13:52.918 27763.433 - 27882.589: 98.6796% ( 2) 00:13:52.918 27882.589 - 28001.745: 98.7016% ( 2) 00:13:52.918 28001.745 - 28120.902: 98.7236% ( 2) 00:13:52.918 28120.902 - 28240.058: 98.7456% ( 2) 00:13:52.918 28240.058 - 28359.215: 98.7676% ( 2) 00:13:52.918 28359.215 - 28478.371: 98.7896% ( 2) 00:13:52.918 28478.371 - 28597.527: 98.8116% ( 2) 00:13:52.918 28597.527 - 28716.684: 98.8336% ( 2) 00:13:52.918 28716.684 - 28835.840: 98.8556% ( 2) 00:13:52.918 28835.840 - 28954.996: 98.8776% ( 2) 00:13:52.918 28954.996 - 29074.153: 98.8996% ( 2) 00:13:52.918 29074.153 - 29193.309: 98.9217% ( 2) 00:13:52.918 29193.309 - 29312.465: 98.9437% ( 2) 00:13:52.918 29312.465 - 29431.622: 98.9547% ( 1) 00:13:52.918 29431.622 - 29550.778: 98.9767% ( 2) 00:13:52.918 29550.778 - 29669.935: 98.9987% ( 2) 00:13:52.918 29669.935 - 29789.091: 99.0207% ( 2) 00:13:52.918 29789.091 - 29908.247: 99.0427% ( 2) 00:13:52.918 29908.247 - 30027.404: 99.0647% ( 2) 00:13:52.918 30027.404 - 30146.560: 99.0867% ( 2) 00:13:52.918 30146.560 - 30265.716: 99.1087% ( 2) 00:13:52.918 30265.716 - 30384.873: 99.1307% ( 2) 00:13:52.918 30384.873 - 30504.029: 99.1527% ( 2) 00:13:52.918 30504.029 - 30742.342: 99.1857% ( 3) 00:13:52.918 30742.342 - 30980.655: 99.2298% ( 4) 00:13:52.918 30980.655 - 31218.967: 99.2738% ( 4) 00:13:52.918 31218.967 - 31457.280: 99.2958% ( 2) 00:13:52.918 39321.600 - 39559.913: 99.3178% ( 2) 00:13:52.918 39559.913 - 39798.225: 99.3618% ( 4) 00:13:52.918 39798.225 - 40036.538: 99.4058% ( 4) 00:13:52.918 40036.538 - 40274.851: 99.4498% ( 4) 00:13:52.918 40274.851 - 40513.164: 99.4938% ( 4) 00:13:52.918 40513.164 - 40751.476: 99.5379% ( 4) 00:13:52.918 40751.476 - 40989.789: 99.5819% ( 4) 00:13:52.918 40989.789 - 41228.102: 99.6259% ( 4) 00:13:52.918 41228.102 - 41466.415: 99.6589% ( 3) 00:13:52.918 41466.415 - 41704.727: 99.7029% ( 4) 00:13:52.918 41704.727 - 41943.040: 99.7469% ( 4) 00:13:52.918 41943.040 - 42181.353: 99.7909% ( 4) 00:13:52.918 42181.353 - 42419.665: 99.8239% ( 3) 00:13:52.918 42419.665 - 42657.978: 99.8680% ( 4) 00:13:52.918 42657.978 - 42896.291: 99.9120% ( 4) 00:13:52.918 42896.291 - 43134.604: 99.9560% ( 4) 00:13:52.918 43134.604 - 43372.916: 100.0000% ( 4) 00:13:52.918 00:13:52.918 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:52.918 ============================================================================== 00:13:52.918 Range in us Cumulative IO count 00:13:52.918 10426.182 - 10485.760: 0.0220% ( 2) 00:13:52.918 10485.760 - 10545.338: 0.1100% ( 8) 00:13:52.918 10545.338 - 10604.916: 0.2201% ( 10) 00:13:52.918 10604.916 - 
10664.495: 0.4071% ( 17) 00:13:52.918 10664.495 - 10724.073: 0.6272% ( 20) 00:13:52.918 10724.073 - 10783.651: 0.7812% ( 14) 00:13:52.918 10783.651 - 10843.229: 1.0893% ( 28) 00:13:52.918 10843.229 - 10902.807: 1.3864% ( 27) 00:13:52.918 10902.807 - 10962.385: 1.6725% ( 26) 00:13:52.918 10962.385 - 11021.964: 2.0577% ( 35) 00:13:52.918 11021.964 - 11081.542: 2.6078% ( 50) 00:13:52.918 11081.542 - 11141.120: 3.0260% ( 38) 00:13:52.918 11141.120 - 11200.698: 3.6202% ( 54) 00:13:52.918 11200.698 - 11260.276: 4.2914% ( 61) 00:13:52.918 11260.276 - 11319.855: 5.0616% ( 70) 00:13:52.918 11319.855 - 11379.433: 5.7218% ( 60) 00:13:52.918 11379.433 - 11439.011: 6.5581% ( 76) 00:13:52.918 11439.011 - 11498.589: 7.4274% ( 79) 00:13:52.918 11498.589 - 11558.167: 8.4287% ( 91) 00:13:52.918 11558.167 - 11617.745: 9.5951% ( 106) 00:13:52.918 11617.745 - 11677.324: 10.9155% ( 120) 00:13:52.918 11677.324 - 11736.902: 12.4230% ( 137) 00:13:52.918 11736.902 - 11796.480: 14.0735% ( 150) 00:13:52.918 11796.480 - 11856.058: 15.6580% ( 144) 00:13:52.918 11856.058 - 11915.636: 17.3305% ( 152) 00:13:52.918 11915.636 - 11975.215: 18.9040% ( 143) 00:13:52.918 11975.215 - 12034.793: 20.6536% ( 159) 00:13:52.918 12034.793 - 12094.371: 22.3812% ( 157) 00:13:52.918 12094.371 - 12153.949: 24.1197% ( 158) 00:13:52.918 12153.949 - 12213.527: 26.1884% ( 188) 00:13:52.918 12213.527 - 12273.105: 28.3121% ( 193) 00:13:52.918 12273.105 - 12332.684: 30.2487% ( 176) 00:13:52.919 12332.684 - 12392.262: 32.3393% ( 190) 00:13:52.919 12392.262 - 12451.840: 34.4850% ( 195) 00:13:52.919 12451.840 - 12511.418: 36.7188% ( 203) 00:13:52.919 12511.418 - 12570.996: 38.8204% ( 191) 00:13:52.919 12570.996 - 12630.575: 40.9551% ( 194) 00:13:52.919 12630.575 - 12690.153: 42.9137% ( 178) 00:13:52.919 12690.153 - 12749.731: 44.9494% ( 185) 00:13:52.919 12749.731 - 12809.309: 46.8640% ( 174) 00:13:52.919 12809.309 - 12868.887: 48.7346% ( 170) 00:13:52.919 12868.887 - 12928.465: 50.5722% ( 167) 00:13:52.919 12928.465 - 12988.044: 52.3658% ( 163) 00:13:52.919 12988.044 - 13047.622: 54.1593% ( 163) 00:13:52.919 13047.622 - 13107.200: 55.9419% ( 162) 00:13:52.919 13107.200 - 13166.778: 57.6805% ( 158) 00:13:52.919 13166.778 - 13226.356: 59.2760% ( 145) 00:13:52.919 13226.356 - 13285.935: 60.8165% ( 140) 00:13:52.919 13285.935 - 13345.513: 62.1589% ( 122) 00:13:52.919 13345.513 - 13405.091: 63.4023% ( 113) 00:13:52.919 13405.091 - 13464.669: 64.6017% ( 109) 00:13:52.919 13464.669 - 13524.247: 65.8561% ( 114) 00:13:52.919 13524.247 - 13583.825: 67.0445% ( 108) 00:13:52.919 13583.825 - 13643.404: 68.2548% ( 110) 00:13:52.919 13643.404 - 13702.982: 69.3112% ( 96) 00:13:52.919 13702.982 - 13762.560: 70.2355% ( 84) 00:13:52.919 13762.560 - 13822.138: 70.9727% ( 67) 00:13:52.919 13822.138 - 13881.716: 71.5999% ( 57) 00:13:52.919 13881.716 - 13941.295: 72.2821% ( 62) 00:13:52.919 13941.295 - 14000.873: 72.9864% ( 64) 00:13:52.919 14000.873 - 14060.451: 73.5805% ( 54) 00:13:52.919 14060.451 - 14120.029: 74.2408% ( 60) 00:13:52.919 14120.029 - 14179.607: 74.8570% ( 56) 00:13:52.919 14179.607 - 14239.185: 75.4401% ( 53) 00:13:52.919 14239.185 - 14298.764: 75.9243% ( 44) 00:13:52.919 14298.764 - 14358.342: 76.2984% ( 34) 00:13:52.919 14358.342 - 14417.920: 76.8046% ( 46) 00:13:52.919 14417.920 - 14477.498: 77.1567% ( 32) 00:13:52.919 14477.498 - 14537.076: 77.5638% ( 37) 00:13:52.919 14537.076 - 14596.655: 78.0040% ( 40) 00:13:52.919 14596.655 - 14656.233: 78.4441% ( 40) 00:13:52.919 14656.233 - 14715.811: 78.8732% ( 39) 00:13:52.919 14715.811 - 14775.389: 
79.3134% ( 40) 00:13:52.919 14775.389 - 14834.967: 79.6765% ( 33) 00:13:52.919 14834.967 - 14894.545: 80.0616% ( 35) 00:13:52.919 14894.545 - 14954.124: 80.4577% ( 36) 00:13:52.919 14954.124 - 15013.702: 80.8759% ( 38) 00:13:52.919 15013.702 - 15073.280: 81.3050% ( 39) 00:13:52.919 15073.280 - 15132.858: 81.7232% ( 38) 00:13:52.919 15132.858 - 15192.436: 82.1083% ( 35) 00:13:52.919 15192.436 - 15252.015: 82.4164% ( 28) 00:13:52.919 15252.015 - 15371.171: 83.0216% ( 55) 00:13:52.919 15371.171 - 15490.327: 83.6928% ( 61) 00:13:52.919 15490.327 - 15609.484: 84.2430% ( 50) 00:13:52.919 15609.484 - 15728.640: 84.8592% ( 56) 00:13:52.919 15728.640 - 15847.796: 85.5414% ( 62) 00:13:52.919 15847.796 - 15966.953: 86.2236% ( 62) 00:13:52.919 15966.953 - 16086.109: 86.8288% ( 55) 00:13:52.919 16086.109 - 16205.265: 87.3460% ( 47) 00:13:52.919 16205.265 - 16324.422: 87.7641% ( 38) 00:13:52.919 16324.422 - 16443.578: 88.1382% ( 34) 00:13:52.919 16443.578 - 16562.735: 88.4023% ( 24) 00:13:52.919 16562.735 - 16681.891: 88.5893% ( 17) 00:13:52.919 16681.891 - 16801.047: 88.7544% ( 15) 00:13:52.919 16801.047 - 16920.204: 88.9305% ( 16) 00:13:52.919 16920.204 - 17039.360: 89.1065% ( 16) 00:13:52.919 17039.360 - 17158.516: 89.2165% ( 10) 00:13:52.919 17158.516 - 17277.673: 89.4256% ( 19) 00:13:52.919 17277.673 - 17396.829: 89.7337% ( 28) 00:13:52.919 17396.829 - 17515.985: 89.8988% ( 15) 00:13:52.919 17515.985 - 17635.142: 90.0748% ( 16) 00:13:52.919 17635.142 - 17754.298: 90.3169% ( 22) 00:13:52.919 17754.298 - 17873.455: 90.5590% ( 22) 00:13:52.919 17873.455 - 17992.611: 90.7790% ( 20) 00:13:52.919 17992.611 - 18111.767: 90.9771% ( 18) 00:13:52.919 18111.767 - 18230.924: 91.2082% ( 21) 00:13:52.919 18230.924 - 18350.080: 91.3952% ( 17) 00:13:52.919 18350.080 - 18469.236: 91.5713% ( 16) 00:13:52.919 18469.236 - 18588.393: 91.7364% ( 15) 00:13:52.919 18588.393 - 18707.549: 91.9454% ( 19) 00:13:52.919 18707.549 - 18826.705: 92.1655% ( 20) 00:13:52.919 18826.705 - 18945.862: 92.4076% ( 22) 00:13:52.919 18945.862 - 19065.018: 92.6386% ( 21) 00:13:52.919 19065.018 - 19184.175: 92.9137% ( 25) 00:13:52.919 19184.175 - 19303.331: 93.1228% ( 19) 00:13:52.919 19303.331 - 19422.487: 93.3979% ( 25) 00:13:52.919 19422.487 - 19541.644: 93.6950% ( 27) 00:13:52.919 19541.644 - 19660.800: 93.9261% ( 21) 00:13:52.919 19660.800 - 19779.956: 94.2121% ( 26) 00:13:52.919 19779.956 - 19899.113: 94.4982% ( 26) 00:13:52.919 19899.113 - 20018.269: 94.7953% ( 27) 00:13:52.919 20018.269 - 20137.425: 95.0044% ( 19) 00:13:52.919 20137.425 - 20256.582: 95.1915% ( 17) 00:13:52.919 20256.582 - 20375.738: 95.3895% ( 18) 00:13:52.919 20375.738 - 20494.895: 95.5766% ( 17) 00:13:52.919 20494.895 - 20614.051: 95.7306% ( 14) 00:13:52.919 20614.051 - 20733.207: 95.9177% ( 17) 00:13:52.919 20733.207 - 20852.364: 96.1268% ( 19) 00:13:52.919 20852.364 - 20971.520: 96.3578% ( 21) 00:13:52.919 20971.520 - 21090.676: 96.5339% ( 16) 00:13:52.919 21090.676 - 21209.833: 96.6879% ( 14) 00:13:52.919 21209.833 - 21328.989: 96.8530% ( 15) 00:13:52.919 21328.989 - 21448.145: 96.9740% ( 11) 00:13:52.919 21448.145 - 21567.302: 97.0731% ( 9) 00:13:52.919 21567.302 - 21686.458: 97.1281% ( 5) 00:13:52.919 21686.458 - 21805.615: 97.1831% ( 5) 00:13:52.919 21805.615 - 21924.771: 97.2381% ( 5) 00:13:52.919 21924.771 - 22043.927: 97.2821% ( 4) 00:13:52.919 22043.927 - 22163.084: 97.3371% ( 5) 00:13:52.919 22163.084 - 22282.240: 97.3922% ( 5) 00:13:52.919 22282.240 - 22401.396: 97.4582% ( 6) 00:13:52.919 22401.396 - 22520.553: 97.5132% ( 5) 00:13:52.919 22520.553 - 
22639.709: 97.5792% ( 6) 00:13:52.919 22639.709 - 22758.865: 97.6342% ( 5) 00:13:52.919 22758.865 - 22878.022: 97.6893% ( 5) 00:13:52.919 22878.022 - 22997.178: 97.7553% ( 6) 00:13:52.919 22997.178 - 23116.335: 97.8213% ( 6) 00:13:52.919 23116.335 - 23235.491: 97.8763% ( 5) 00:13:52.919 23235.491 - 23354.647: 97.8873% ( 1) 00:13:52.919 23712.116 - 23831.273: 97.9754% ( 8) 00:13:52.919 23831.273 - 23950.429: 98.0524% ( 7) 00:13:52.919 23950.429 - 24069.585: 98.1184% ( 6) 00:13:52.919 24069.585 - 24188.742: 98.1734% ( 5) 00:13:52.919 24188.742 - 24307.898: 98.2174% ( 4) 00:13:52.919 24307.898 - 24427.055: 98.2835% ( 6) 00:13:52.919 24427.055 - 24546.211: 98.3275% ( 4) 00:13:52.919 24546.211 - 24665.367: 98.4045% ( 7) 00:13:52.919 24665.367 - 24784.524: 98.4595% ( 5) 00:13:52.919 24784.524 - 24903.680: 98.5145% ( 5) 00:13:52.919 24903.680 - 25022.836: 98.5805% ( 6) 00:13:52.919 25022.836 - 25141.993: 98.6356% ( 5) 00:13:52.919 25141.993 - 25261.149: 98.7016% ( 6) 00:13:52.919 25261.149 - 25380.305: 98.7676% ( 6) 00:13:52.919 25380.305 - 25499.462: 98.8226% ( 5) 00:13:52.919 25499.462 - 25618.618: 98.8886% ( 6) 00:13:52.919 25618.618 - 25737.775: 98.9437% ( 5) 00:13:52.919 25737.775 - 25856.931: 98.9767% ( 3) 00:13:52.919 25856.931 - 25976.087: 98.9987% ( 2) 00:13:52.919 25976.087 - 26095.244: 99.0207% ( 2) 00:13:52.919 26095.244 - 26214.400: 99.0317% ( 1) 00:13:52.919 26214.400 - 26333.556: 99.0537% ( 2) 00:13:52.919 26333.556 - 26452.713: 99.0757% ( 2) 00:13:52.919 26452.713 - 26571.869: 99.0977% ( 2) 00:13:52.919 26571.869 - 26691.025: 99.1197% ( 2) 00:13:52.919 26691.025 - 26810.182: 99.1417% ( 2) 00:13:52.919 26810.182 - 26929.338: 99.1637% ( 2) 00:13:52.919 26929.338 - 27048.495: 99.1857% ( 2) 00:13:52.919 27048.495 - 27167.651: 99.2077% ( 2) 00:13:52.919 27167.651 - 27286.807: 99.2298% ( 2) 00:13:52.919 27286.807 - 27405.964: 99.2518% ( 2) 00:13:52.919 27405.964 - 27525.120: 99.2738% ( 2) 00:13:52.919 27525.120 - 27644.276: 99.2848% ( 1) 00:13:52.919 27644.276 - 27763.433: 99.2958% ( 1) 00:13:52.919 35746.909 - 35985.222: 99.3398% ( 4) 00:13:52.919 35985.222 - 36223.535: 99.3838% ( 4) 00:13:52.919 36223.535 - 36461.847: 99.4278% ( 4) 00:13:52.919 36461.847 - 36700.160: 99.4718% ( 4) 00:13:52.919 36700.160 - 36938.473: 99.5048% ( 3) 00:13:52.919 36938.473 - 37176.785: 99.5489% ( 4) 00:13:52.919 37176.785 - 37415.098: 99.5819% ( 3) 00:13:52.919 37415.098 - 37653.411: 99.6259% ( 4) 00:13:52.919 37653.411 - 37891.724: 99.6699% ( 4) 00:13:52.919 37891.724 - 38130.036: 99.7139% ( 4) 00:13:52.919 38130.036 - 38368.349: 99.7579% ( 4) 00:13:52.919 38368.349 - 38606.662: 99.8019% ( 4) 00:13:52.919 38606.662 - 38844.975: 99.8460% ( 4) 00:13:52.919 38844.975 - 39083.287: 99.8900% ( 4) 00:13:52.919 39083.287 - 39321.600: 99.9230% ( 3) 00:13:52.919 39321.600 - 39559.913: 99.9670% ( 4) 00:13:52.919 39559.913 - 39798.225: 100.0000% ( 3) 00:13:52.919 00:13:52.919 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:52.919 ============================================================================== 00:13:52.919 Range in us Cumulative IO count 00:13:52.919 10307.025 - 10366.604: 0.0110% ( 1) 00:13:52.919 10426.182 - 10485.760: 0.0330% ( 2) 00:13:52.919 10485.760 - 10545.338: 0.1100% ( 7) 00:13:52.919 10545.338 - 10604.916: 0.1981% ( 8) 00:13:52.919 10604.916 - 10664.495: 0.3081% ( 10) 00:13:52.919 10664.495 - 10724.073: 0.4732% ( 15) 00:13:52.919 10724.073 - 10783.651: 0.7042% ( 21) 00:13:52.919 10783.651 - 10843.229: 0.9133% ( 19) 00:13:52.919 10843.229 - 10902.807: 1.1114% ( 18) 
00:13:52.919 10902.807 - 10962.385: 1.4415% ( 30) 00:13:52.919 10962.385 - 11021.964: 1.7606% ( 29) 00:13:52.919 11021.964 - 11081.542: 2.2007% ( 40) 00:13:52.919 11081.542 - 11141.120: 2.8499% ( 59) 00:13:52.920 11141.120 - 11200.698: 3.4991% ( 59) 00:13:52.920 11200.698 - 11260.276: 4.1373% ( 58) 00:13:52.920 11260.276 - 11319.855: 4.7865% ( 59) 00:13:52.920 11319.855 - 11379.433: 5.6668% ( 80) 00:13:52.920 11379.433 - 11439.011: 6.3600% ( 63) 00:13:52.920 11439.011 - 11498.589: 7.3504% ( 90) 00:13:52.920 11498.589 - 11558.167: 8.6708% ( 120) 00:13:52.920 11558.167 - 11617.745: 9.9692% ( 118) 00:13:52.920 11617.745 - 11677.324: 11.2676% ( 118) 00:13:52.920 11677.324 - 11736.902: 12.7531% ( 135) 00:13:52.920 11736.902 - 11796.480: 14.4146% ( 151) 00:13:52.920 11796.480 - 11856.058: 16.0541% ( 149) 00:13:52.920 11856.058 - 11915.636: 17.8147% ( 160) 00:13:52.920 11915.636 - 11975.215: 19.6963% ( 171) 00:13:52.920 11975.215 - 12034.793: 21.4899% ( 163) 00:13:52.920 12034.793 - 12094.371: 23.2394% ( 159) 00:13:52.920 12094.371 - 12153.949: 25.0000% ( 160) 00:13:52.920 12153.949 - 12213.527: 26.8816% ( 171) 00:13:52.920 12213.527 - 12273.105: 28.9173% ( 185) 00:13:52.920 12273.105 - 12332.684: 31.0299% ( 192) 00:13:52.920 12332.684 - 12392.262: 33.1536% ( 193) 00:13:52.920 12392.262 - 12451.840: 35.2663% ( 192) 00:13:52.920 12451.840 - 12511.418: 37.3680% ( 191) 00:13:52.920 12511.418 - 12570.996: 39.4366% ( 188) 00:13:52.920 12570.996 - 12630.575: 41.4833% ( 186) 00:13:52.920 12630.575 - 12690.153: 43.4969% ( 183) 00:13:52.920 12690.153 - 12749.731: 45.4886% ( 181) 00:13:52.920 12749.731 - 12809.309: 47.4582% ( 179) 00:13:52.920 12809.309 - 12868.887: 49.2298% ( 161) 00:13:52.920 12868.887 - 12928.465: 50.8913% ( 151) 00:13:52.920 12928.465 - 12988.044: 52.6518% ( 160) 00:13:52.920 12988.044 - 13047.622: 54.5114% ( 169) 00:13:52.920 13047.622 - 13107.200: 56.2500% ( 158) 00:13:52.920 13107.200 - 13166.778: 57.8345% ( 144) 00:13:52.920 13166.778 - 13226.356: 59.3860% ( 141) 00:13:52.920 13226.356 - 13285.935: 60.8165% ( 130) 00:13:52.920 13285.935 - 13345.513: 62.1259% ( 119) 00:13:52.920 13345.513 - 13405.091: 63.3803% ( 114) 00:13:52.920 13405.091 - 13464.669: 64.7227% ( 122) 00:13:52.920 13464.669 - 13524.247: 65.9881% ( 115) 00:13:52.920 13524.247 - 13583.825: 67.2645% ( 116) 00:13:52.920 13583.825 - 13643.404: 68.3759% ( 101) 00:13:52.920 13643.404 - 13702.982: 69.4102% ( 94) 00:13:52.920 13702.982 - 13762.560: 70.2465% ( 76) 00:13:52.920 13762.560 - 13822.138: 71.0277% ( 71) 00:13:52.920 13822.138 - 13881.716: 71.7210% ( 63) 00:13:52.920 13881.716 - 13941.295: 72.3812% ( 60) 00:13:52.920 13941.295 - 14000.873: 73.0194% ( 58) 00:13:52.920 14000.873 - 14060.451: 73.6136% ( 54) 00:13:52.920 14060.451 - 14120.029: 74.2077% ( 54) 00:13:52.920 14120.029 - 14179.607: 74.8019% ( 54) 00:13:52.920 14179.607 - 14239.185: 75.3191% ( 47) 00:13:52.920 14239.185 - 14298.764: 75.8143% ( 45) 00:13:52.920 14298.764 - 14358.342: 76.2984% ( 44) 00:13:52.920 14358.342 - 14417.920: 76.6725% ( 34) 00:13:52.920 14417.920 - 14477.498: 77.1237% ( 41) 00:13:52.920 14477.498 - 14537.076: 77.5968% ( 43) 00:13:52.920 14537.076 - 14596.655: 77.9489% ( 32) 00:13:52.920 14596.655 - 14656.233: 78.3561% ( 37) 00:13:52.920 14656.233 - 14715.811: 78.7412% ( 35) 00:13:52.920 14715.811 - 14775.389: 79.2033% ( 42) 00:13:52.920 14775.389 - 14834.967: 79.6985% ( 45) 00:13:52.920 14834.967 - 14894.545: 80.1717% ( 43) 00:13:52.920 14894.545 - 14954.124: 80.5568% ( 35) 00:13:52.920 14954.124 - 15013.702: 80.9749% ( 38) 
00:13:52.920 15013.702 - 15073.280: 81.4371% ( 42) 00:13:52.920 15073.280 - 15132.858: 81.7232% ( 26) 00:13:52.920 15132.858 - 15192.436: 82.0753% ( 32) 00:13:52.920 15192.436 - 15252.015: 82.3724% ( 27) 00:13:52.920 15252.015 - 15371.171: 83.0766% ( 64) 00:13:52.920 15371.171 - 15490.327: 83.8798% ( 73) 00:13:52.920 15490.327 - 15609.484: 84.4300% ( 50) 00:13:52.920 15609.484 - 15728.640: 85.0242% ( 54) 00:13:52.920 15728.640 - 15847.796: 85.6624% ( 58) 00:13:52.920 15847.796 - 15966.953: 86.2126% ( 50) 00:13:52.920 15966.953 - 16086.109: 86.7077% ( 45) 00:13:52.920 16086.109 - 16205.265: 87.0599% ( 32) 00:13:52.920 16205.265 - 16324.422: 87.5110% ( 41) 00:13:52.920 16324.422 - 16443.578: 87.8081% ( 27) 00:13:52.920 16443.578 - 16562.735: 87.9732% ( 15) 00:13:52.920 16562.735 - 16681.891: 88.0942% ( 11) 00:13:52.920 16681.891 - 16801.047: 88.1822% ( 8) 00:13:52.920 16801.047 - 16920.204: 88.3033% ( 11) 00:13:52.920 16920.204 - 17039.360: 88.4243% ( 11) 00:13:52.920 17039.360 - 17158.516: 88.6004% ( 16) 00:13:52.920 17158.516 - 17277.673: 88.7984% ( 18) 00:13:52.920 17277.673 - 17396.829: 89.1175% ( 29) 00:13:52.920 17396.829 - 17515.985: 89.3596% ( 22) 00:13:52.920 17515.985 - 17635.142: 89.5907% ( 21) 00:13:52.920 17635.142 - 17754.298: 89.8548% ( 24) 00:13:52.920 17754.298 - 17873.455: 90.1518% ( 27) 00:13:52.920 17873.455 - 17992.611: 90.4159% ( 24) 00:13:52.920 17992.611 - 18111.767: 90.6910% ( 25) 00:13:52.920 18111.767 - 18230.924: 90.9441% ( 23) 00:13:52.920 18230.924 - 18350.080: 91.2192% ( 25) 00:13:52.920 18350.080 - 18469.236: 91.4943% ( 25) 00:13:52.920 18469.236 - 18588.393: 91.7584% ( 24) 00:13:52.920 18588.393 - 18707.549: 91.9894% ( 21) 00:13:52.920 18707.549 - 18826.705: 92.3526% ( 33) 00:13:52.920 18826.705 - 18945.862: 92.5946% ( 22) 00:13:52.920 18945.862 - 19065.018: 92.8257% ( 21) 00:13:52.920 19065.018 - 19184.175: 93.0898% ( 24) 00:13:52.920 19184.175 - 19303.331: 93.3759% ( 26) 00:13:52.920 19303.331 - 19422.487: 93.6730% ( 27) 00:13:52.920 19422.487 - 19541.644: 93.9261% ( 23) 00:13:52.920 19541.644 - 19660.800: 94.1131% ( 17) 00:13:52.920 19660.800 - 19779.956: 94.3882% ( 25) 00:13:52.920 19779.956 - 19899.113: 94.6853% ( 27) 00:13:52.920 19899.113 - 20018.269: 94.9164% ( 21) 00:13:52.920 20018.269 - 20137.425: 95.1254% ( 19) 00:13:52.920 20137.425 - 20256.582: 95.3345% ( 19) 00:13:52.920 20256.582 - 20375.738: 95.5106% ( 16) 00:13:52.920 20375.738 - 20494.895: 95.6426% ( 12) 00:13:52.920 20494.895 - 20614.051: 95.8077% ( 15) 00:13:52.920 20614.051 - 20733.207: 96.0277% ( 20) 00:13:52.920 20733.207 - 20852.364: 96.2918% ( 24) 00:13:52.920 20852.364 - 20971.520: 96.5559% ( 24) 00:13:52.920 20971.520 - 21090.676: 96.7650% ( 19) 00:13:52.920 21090.676 - 21209.833: 96.8860% ( 11) 00:13:52.920 21209.833 - 21328.989: 97.0180% ( 12) 00:13:52.920 21328.989 - 21448.145: 97.1611% ( 13) 00:13:52.920 21448.145 - 21567.302: 97.2491% ( 8) 00:13:52.920 21567.302 - 21686.458: 97.3371% ( 8) 00:13:52.920 21686.458 - 21805.615: 97.4472% ( 10) 00:13:52.920 21805.615 - 21924.771: 97.5352% ( 8) 00:13:52.920 21924.771 - 22043.927: 97.6232% ( 8) 00:13:52.920 22043.927 - 22163.084: 97.7003% ( 7) 00:13:52.920 22163.084 - 22282.240: 97.7883% ( 8) 00:13:52.920 22282.240 - 22401.396: 97.8653% ( 7) 00:13:52.920 22401.396 - 22520.553: 97.9423% ( 7) 00:13:52.920 22520.553 - 22639.709: 98.0304% ( 8) 00:13:52.920 22639.709 - 22758.865: 98.1074% ( 7) 00:13:52.920 22758.865 - 22878.022: 98.1954% ( 8) 00:13:52.920 22878.022 - 22997.178: 98.2724% ( 7) 00:13:52.920 22997.178 - 23116.335: 98.3605% ( 
8) 00:13:52.920 23116.335 - 23235.491: 98.4705% ( 10) 00:13:52.920 23235.491 - 23354.647: 98.5695% ( 9) 00:13:52.920 23354.647 - 23473.804: 98.6246% ( 5) 00:13:52.920 23473.804 - 23592.960: 98.6796% ( 5) 00:13:52.920 23592.960 - 23712.116: 98.7346% ( 5) 00:13:52.920 23712.116 - 23831.273: 98.7896% ( 5) 00:13:52.920 23831.273 - 23950.429: 98.8556% ( 6) 00:13:52.920 23950.429 - 24069.585: 98.9107% ( 5) 00:13:52.920 24069.585 - 24188.742: 98.9547% ( 4) 00:13:52.920 24188.742 - 24307.898: 98.9877% ( 3) 00:13:52.920 24307.898 - 24427.055: 99.0317% ( 4) 00:13:52.920 24427.055 - 24546.211: 99.0757% ( 4) 00:13:52.920 24546.211 - 24665.367: 99.1087% ( 3) 00:13:52.920 24665.367 - 24784.524: 99.1527% ( 4) 00:13:52.920 24784.524 - 24903.680: 99.1967% ( 4) 00:13:52.920 24903.680 - 25022.836: 99.2408% ( 4) 00:13:52.920 25022.836 - 25141.993: 99.2738% ( 3) 00:13:52.920 25141.993 - 25261.149: 99.2958% ( 2) 00:13:52.920 31933.905 - 32172.218: 99.3068% ( 1) 00:13:52.920 32172.218 - 32410.531: 99.3398% ( 3) 00:13:52.920 32410.531 - 32648.844: 99.3838% ( 4) 00:13:52.920 32648.844 - 32887.156: 99.4278% ( 4) 00:13:52.920 32887.156 - 33125.469: 99.4718% ( 4) 00:13:52.920 33125.469 - 33363.782: 99.5158% ( 4) 00:13:52.920 33363.782 - 33602.095: 99.5599% ( 4) 00:13:52.920 33602.095 - 33840.407: 99.6039% ( 4) 00:13:52.920 33840.407 - 34078.720: 99.6479% ( 4) 00:13:52.920 34078.720 - 34317.033: 99.6919% ( 4) 00:13:52.920 34317.033 - 34555.345: 99.7359% ( 4) 00:13:52.920 34555.345 - 34793.658: 99.7689% ( 3) 00:13:52.920 34793.658 - 35031.971: 99.8129% ( 4) 00:13:52.920 35031.971 - 35270.284: 99.8570% ( 4) 00:13:52.920 35270.284 - 35508.596: 99.9010% ( 4) 00:13:52.920 35508.596 - 35746.909: 99.9450% ( 4) 00:13:52.920 35746.909 - 35985.222: 99.9890% ( 4) 00:13:52.920 35985.222 - 36223.535: 100.0000% ( 1) 00:13:52.920 00:13:53.179 03:42:07 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:13:53.179 00:13:53.179 real 0m2.881s 00:13:53.179 user 0m2.410s 00:13:53.179 sys 0m0.343s 00:13:53.179 03:42:07 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.179 03:42:07 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:13:53.179 ************************************ 00:13:53.179 END TEST nvme_perf 00:13:53.179 ************************************ 00:13:53.179 03:42:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:53.179 03:42:07 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:53.179 03:42:07 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:53.179 03:42:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.179 03:42:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.179 ************************************ 00:13:53.179 START TEST nvme_hello_world 00:13:53.179 ************************************ 00:13:53.179 03:42:07 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:53.437 Initializing NVMe Controllers 00:13:53.437 Attached to 0000:00:10.0 00:13:53.437 Namespace ID: 1 size: 6GB 00:13:53.437 Attached to 0000:00:11.0 00:13:53.437 Namespace ID: 1 size: 5GB 00:13:53.437 Attached to 0000:00:13.0 00:13:53.437 Namespace ID: 1 size: 1GB 00:13:53.437 Attached to 0000:00:12.0 00:13:53.437 Namespace ID: 1 size: 4GB 00:13:53.437 Namespace ID: 2 size: 4GB 00:13:53.437 Namespace ID: 3 size: 4GB 00:13:53.437 Initialization complete. 
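The nvme_perf latency histograms above (one per controller namespace) print cumulative coverage per bucket: each row reads "<low us> - <high us>: <cumulative %> ( <IO count> )". A minimal sketch, not part of the SPDK suite, for pulling the first bucket that reaches 99% cumulative coverage; it assumes the rows of one histogram were saved one per line, without timestamp prefixes, to perf_hist.log (a hypothetical file name):

    awk '$2 == "-" && $4 ~ /%$/ {
             pct = $4; sub(/%$/, "", pct)          # "99.0207%" -> 99.0207
             if (pct + 0 >= 99.0) {
                 hi = $3; sub(/:$/, "", hi)        # upper edge of the bucket, in us
                 print hi " us"; exit
             }
         }' perf_hist.log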
00:13:53.437 INFO: using host memory buffer for IO 00:13:53.437 Hello world! 00:13:53.437 INFO: using host memory buffer for IO 00:13:53.437 Hello world! 00:13:53.437 INFO: using host memory buffer for IO 00:13:53.437 Hello world! 00:13:53.437 INFO: using host memory buffer for IO 00:13:53.437 Hello world! 00:13:53.437 INFO: using host memory buffer for IO 00:13:53.437 Hello world! 00:13:53.437 INFO: using host memory buffer for IO 00:13:53.437 Hello world! 00:13:53.437 00:13:53.437 real 0m0.299s 00:13:53.437 user 0m0.126s 00:13:53.437 sys 0m0.123s 00:13:53.437 03:42:08 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.437 03:42:08 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:53.437 ************************************ 00:13:53.437 END TEST nvme_hello_world 00:13:53.437 ************************************ 00:13:53.437 03:42:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:53.437 03:42:08 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:53.437 03:42:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:53.437 03:42:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.437 03:42:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.437 ************************************ 00:13:53.437 START TEST nvme_sgl 00:13:53.437 ************************************ 00:13:53.437 03:42:08 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:53.695 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:13:53.695 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:13:53.695 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:13:53.695 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:13:53.695 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:13:53.695 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:13:53.695 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:13:53.695 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:13:53.695 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:13:53.695 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:13:53.695 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:13:53.695 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:13:53.695 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:13:53.696 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:13:53.696 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_2 Invalid IO length 
parameter 00:13:53.696 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:13:53.696 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:13:53.953 NVMe Readv/Writev Request test 00:13:53.953 Attached to 0000:00:10.0 00:13:53.953 Attached to 0000:00:11.0 00:13:53.953 Attached to 0000:00:13.0 00:13:53.953 Attached to 0000:00:12.0 00:13:53.953 0000:00:10.0: build_io_request_2 test passed 00:13:53.953 0000:00:10.0: build_io_request_4 test passed 00:13:53.953 0000:00:10.0: build_io_request_5 test passed 00:13:53.953 0000:00:10.0: build_io_request_6 test passed 00:13:53.953 0000:00:10.0: build_io_request_7 test passed 00:13:53.953 0000:00:10.0: build_io_request_10 test passed 00:13:53.954 0000:00:11.0: build_io_request_2 test passed 00:13:53.954 0000:00:11.0: build_io_request_4 test passed 00:13:53.954 0000:00:11.0: build_io_request_5 test passed 00:13:53.954 0000:00:11.0: build_io_request_6 test passed 00:13:53.954 0000:00:11.0: build_io_request_7 test passed 00:13:53.954 0000:00:11.0: build_io_request_10 test passed 00:13:53.954 Cleaning up... 00:13:53.954 00:13:53.954 real 0m0.430s 00:13:53.954 user 0m0.208s 00:13:53.954 sys 0m0.165s 00:13:53.954 03:42:08 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.954 03:42:08 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:13:53.954 ************************************ 00:13:53.954 END TEST nvme_sgl 00:13:53.954 ************************************ 00:13:53.954 03:42:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:53.954 03:42:08 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:53.954 03:42:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:53.954 03:42:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.954 03:42:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.954 ************************************ 00:13:53.954 START TEST nvme_e2edp 00:13:53.954 ************************************ 00:13:53.954 03:42:08 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:54.212 NVMe Write/Read with End-to-End data protection test 00:13:54.212 Attached to 0000:00:10.0 00:13:54.212 Attached to 0000:00:11.0 00:13:54.212 Attached to 0000:00:13.0 00:13:54.212 Attached to 0000:00:12.0 00:13:54.212 Cleaning up... 
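Each TEST block in this log is bracketed by START TEST / END TEST banners and closes with a real/user/sys timing triple. A minimal sketch, not part of the suite, that lists every test's wall-clock time from a saved copy of the log (autotest.log is a hypothetical name), assuming one log message per line:

    awk 'match($0, /START TEST [[:alnum:]_]+/) { name = substr($0, RSTART + 11, RLENGTH - 11) }
         match($0, /real[[:space:]]+[0-9]+m[0-9.]+s/) { print name, $NF }' autotest.log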
00:13:54.212 00:13:54.212 real 0m0.371s 00:13:54.212 user 0m0.166s 00:13:54.212 sys 0m0.150s 00:13:54.212 03:42:09 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:54.212 03:42:09 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:13:54.212 ************************************ 00:13:54.212 END TEST nvme_e2edp 00:13:54.212 ************************************ 00:13:54.212 03:42:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:54.212 03:42:09 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:54.212 03:42:09 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:54.212 03:42:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.212 03:42:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.212 ************************************ 00:13:54.212 START TEST nvme_reserve 00:13:54.212 ************************************ 00:13:54.212 03:42:09 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:54.788 ===================================================== 00:13:54.788 NVMe Controller at PCI bus 0, device 16, function 0 00:13:54.788 ===================================================== 00:13:54.788 Reservations: Not Supported 00:13:54.788 ===================================================== 00:13:54.788 NVMe Controller at PCI bus 0, device 17, function 0 00:13:54.788 ===================================================== 00:13:54.788 Reservations: Not Supported 00:13:54.788 ===================================================== 00:13:54.788 NVMe Controller at PCI bus 0, device 19, function 0 00:13:54.788 ===================================================== 00:13:54.788 Reservations: Not Supported 00:13:54.788 ===================================================== 00:13:54.788 NVMe Controller at PCI bus 0, device 18, function 0 00:13:54.788 ===================================================== 00:13:54.788 Reservations: Not Supported 00:13:54.788 Reservation test passed 00:13:54.788 00:13:54.788 real 0m0.318s 00:13:54.788 user 0m0.107s 00:13:54.788 sys 0m0.164s 00:13:54.788 03:42:09 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:54.788 03:42:09 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:13:54.788 ************************************ 00:13:54.788 END TEST nvme_reserve 00:13:54.788 ************************************ 00:13:54.788 03:42:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:54.788 03:42:09 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:54.788 03:42:09 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:54.788 03:42:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.788 03:42:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.788 ************************************ 00:13:54.788 START TEST nvme_err_injection 00:13:54.788 ************************************ 00:13:54.788 03:42:09 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:55.046 NVMe Error Injection test 00:13:55.046 Attached to 0000:00:10.0 00:13:55.046 Attached to 0000:00:11.0 00:13:55.046 Attached to 0000:00:13.0 00:13:55.046 Attached to 0000:00:12.0 00:13:55.046 0000:00:13.0: get features failed as expected 00:13:55.046 0000:00:12.0: get features 
failed as expected 00:13:55.046 0000:00:10.0: get features failed as expected 00:13:55.046 0000:00:11.0: get features failed as expected 00:13:55.046 0000:00:10.0: get features successfully as expected 00:13:55.046 0000:00:11.0: get features successfully as expected 00:13:55.046 0000:00:13.0: get features successfully as expected 00:13:55.046 0000:00:12.0: get features successfully as expected 00:13:55.046 0000:00:10.0: read failed as expected 00:13:55.046 0000:00:11.0: read failed as expected 00:13:55.046 0000:00:13.0: read failed as expected 00:13:55.046 0000:00:12.0: read failed as expected 00:13:55.046 0000:00:10.0: read successfully as expected 00:13:55.046 0000:00:11.0: read successfully as expected 00:13:55.046 0000:00:13.0: read successfully as expected 00:13:55.046 0000:00:12.0: read successfully as expected 00:13:55.046 Cleaning up... 00:13:55.047 00:13:55.047 real 0m0.373s 00:13:55.047 user 0m0.163s 00:13:55.047 sys 0m0.160s 00:13:55.047 03:42:09 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.047 03:42:09 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:13:55.047 ************************************ 00:13:55.047 END TEST nvme_err_injection 00:13:55.047 ************************************ 00:13:55.047 03:42:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:55.047 03:42:09 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:55.047 03:42:09 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:55.047 03:42:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.047 03:42:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:55.047 ************************************ 00:13:55.047 START TEST nvme_overhead 00:13:55.047 ************************************ 00:13:55.047 03:42:09 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:56.421 Initializing NVMe Controllers 00:13:56.421 Attached to 0000:00:10.0 00:13:56.421 Attached to 0000:00:11.0 00:13:56.421 Attached to 0000:00:13.0 00:13:56.421 Attached to 0000:00:12.0 00:13:56.422 Initialization complete. Launching workers. 
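The nvme_err_injection output above interleaves the injected failures ("... failed as expected") with the follow-up successes ("... successfully as expected") for all four controllers. A minimal sketch, not part of the suite, to confirm each controller reported both, assuming the messages were saved one per line to err_injection.log (a hypothetical name):

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        grep -q "$bdf: get features failed as expected" err_injection.log &&
        grep -q "$bdf: read successfully as expected" err_injection.log &&
        echo "$bdf ok"
    done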
00:13:56.422 submit (in ns) avg, min, max = 28823.3, 14320.0, 191922.7 00:13:56.422 complete (in ns) avg, min, max = 19788.8, 9532.7, 206037.3 00:13:56.422 00:13:56.422 Submit histogram 00:13:56.422 ================ 00:13:56.422 Range in us Cumulative Count 00:13:56.422 14.313 - 14.371: 0.0300% ( 3) 00:13:56.422 14.371 - 14.429: 0.0399% ( 1) 00:13:56.422 14.429 - 14.487: 0.0899% ( 5) 00:13:56.422 14.487 - 14.545: 0.1897% ( 10) 00:13:56.422 14.545 - 14.604: 0.2896% ( 10) 00:13:56.422 14.604 - 14.662: 0.3595% ( 7) 00:13:56.422 14.662 - 14.720: 0.3894% ( 3) 00:13:56.422 14.778 - 14.836: 0.4094% ( 2) 00:13:56.422 14.895 - 15.011: 0.4194% ( 1) 00:13:56.422 15.011 - 15.127: 0.4393% ( 2) 00:13:56.422 15.127 - 15.244: 0.4493% ( 1) 00:13:56.422 15.244 - 15.360: 0.4593% ( 1) 00:13:56.422 15.360 - 15.476: 0.4693% ( 1) 00:13:56.422 15.476 - 15.593: 0.4793% ( 1) 00:13:56.422 19.433 - 19.549: 0.4893% ( 1) 00:13:56.422 20.015 - 20.131: 0.5092% ( 2) 00:13:56.422 20.131 - 20.247: 0.5192% ( 1) 00:13:56.422 20.247 - 20.364: 0.5292% ( 1) 00:13:56.422 20.364 - 20.480: 0.5392% ( 1) 00:13:56.422 20.480 - 20.596: 0.5791% ( 4) 00:13:56.422 20.596 - 20.713: 0.5991% ( 2) 00:13:56.422 20.713 - 20.829: 0.6590% ( 6) 00:13:56.422 20.829 - 20.945: 0.7289% ( 7) 00:13:56.422 20.945 - 21.062: 0.7489% ( 2) 00:13:56.422 21.062 - 21.178: 0.7988% ( 5) 00:13:56.422 21.178 - 21.295: 0.8587% ( 6) 00:13:56.422 21.295 - 21.411: 0.9586% ( 10) 00:13:56.422 21.411 - 21.527: 1.0484% ( 9) 00:13:56.422 21.527 - 21.644: 1.1083% ( 6) 00:13:56.422 21.644 - 21.760: 1.1682% ( 6) 00:13:56.422 21.760 - 21.876: 1.2681% ( 10) 00:13:56.422 21.876 - 21.993: 1.3480% ( 8) 00:13:56.422 21.993 - 22.109: 1.3879% ( 4) 00:13:56.422 22.109 - 22.225: 1.4878% ( 10) 00:13:56.422 22.225 - 22.342: 1.5577% ( 7) 00:13:56.422 22.342 - 22.458: 1.7074% ( 15) 00:13:56.422 22.458 - 22.575: 1.7673% ( 6) 00:13:56.422 22.575 - 22.691: 1.8572% ( 9) 00:13:56.422 22.691 - 22.807: 1.8772% ( 2) 00:13:56.422 22.807 - 22.924: 1.9271% ( 5) 00:13:56.422 22.924 - 23.040: 2.0170% ( 9) 00:13:56.422 23.040 - 23.156: 2.1168% ( 10) 00:13:56.422 23.156 - 23.273: 2.1767% ( 6) 00:13:56.422 23.273 - 23.389: 2.1967% ( 2) 00:13:56.422 23.389 - 23.505: 2.2766% ( 8) 00:13:56.422 23.505 - 23.622: 2.3365% ( 6) 00:13:56.422 23.622 - 23.738: 2.4663% ( 13) 00:13:56.422 23.738 - 23.855: 2.5861% ( 12) 00:13:56.422 23.855 - 23.971: 2.6960% ( 11) 00:13:56.422 23.971 - 24.087: 2.8857% ( 19) 00:13:56.422 24.087 - 24.204: 3.0055% ( 12) 00:13:56.422 24.204 - 24.320: 3.1852% ( 18) 00:13:56.422 24.320 - 24.436: 3.4548% ( 27) 00:13:56.422 24.436 - 24.553: 3.6845% ( 23) 00:13:56.422 24.553 - 24.669: 4.0339% ( 35) 00:13:56.422 24.669 - 24.785: 4.4034% ( 37) 00:13:56.422 24.785 - 24.902: 4.8527% ( 45) 00:13:56.422 24.902 - 25.018: 5.4518% ( 60) 00:13:56.422 25.018 - 25.135: 6.0010% ( 55) 00:13:56.422 25.135 - 25.251: 6.7299% ( 73) 00:13:56.422 25.251 - 25.367: 7.4688% ( 74) 00:13:56.422 25.367 - 25.484: 8.2976% ( 83) 00:13:56.422 25.484 - 25.600: 9.4858% ( 119) 00:13:56.422 25.600 - 25.716: 10.4343% ( 95) 00:13:56.422 25.716 - 25.833: 11.5127% ( 108) 00:13:56.422 25.833 - 25.949: 12.8607% ( 135) 00:13:56.422 25.949 - 26.065: 14.3485% ( 149) 00:13:56.422 26.065 - 26.182: 15.7164% ( 137) 00:13:56.422 26.182 - 26.298: 17.1143% ( 140) 00:13:56.422 26.298 - 26.415: 18.6920% ( 158) 00:13:56.422 26.415 - 26.531: 20.5492% ( 186) 00:13:56.422 26.531 - 26.647: 22.3565% ( 181) 00:13:56.422 26.647 - 26.764: 24.2936% ( 194) 00:13:56.422 26.764 - 26.880: 26.1008% ( 181) 00:13:56.422 26.880 - 26.996: 27.9181% ( 182) 
00:13:56.422 26.996 - 27.113: 30.0549% ( 214) 00:13:56.422 27.113 - 27.229: 32.0220% ( 197) 00:13:56.422 27.229 - 27.345: 33.9091% ( 189) 00:13:56.422 27.345 - 27.462: 35.8063% ( 190) 00:13:56.422 27.462 - 27.578: 37.8732% ( 207) 00:13:56.422 27.578 - 27.695: 39.8303% ( 196) 00:13:56.422 27.695 - 27.811: 41.8173% ( 199) 00:13:56.422 27.811 - 27.927: 43.6046% ( 179) 00:13:56.422 27.927 - 28.044: 45.6915% ( 209) 00:13:56.422 28.044 - 28.160: 47.4688% ( 178) 00:13:56.422 28.160 - 28.276: 49.7554% ( 229) 00:13:56.422 28.276 - 28.393: 51.6326% ( 188) 00:13:56.422 28.393 - 28.509: 53.5297% ( 190) 00:13:56.422 28.509 - 28.625: 55.6865% ( 216) 00:13:56.422 28.625 - 28.742: 57.3739% ( 169) 00:13:56.422 28.742 - 28.858: 59.1213% ( 175) 00:13:56.422 28.858 - 28.975: 60.8787% ( 176) 00:13:56.422 28.975 - 29.091: 62.6161% ( 174) 00:13:56.422 29.091 - 29.207: 64.4234% ( 181) 00:13:56.422 29.207 - 29.324: 65.9111% ( 149) 00:13:56.422 29.324 - 29.440: 67.6186% ( 171) 00:13:56.422 29.440 - 29.556: 69.1962% ( 158) 00:13:56.422 29.556 - 29.673: 70.7439% ( 155) 00:13:56.422 29.673 - 29.789: 71.9920% ( 125) 00:13:56.422 29.789 - 30.022: 74.6680% ( 268) 00:13:56.422 30.022 - 30.255: 77.3540% ( 269) 00:13:56.422 30.255 - 30.487: 79.7803% ( 243) 00:13:56.422 30.487 - 30.720: 82.0969% ( 232) 00:13:56.422 30.720 - 30.953: 84.1338% ( 204) 00:13:56.422 30.953 - 31.185: 85.9011% ( 177) 00:13:56.422 31.185 - 31.418: 87.3390% ( 144) 00:13:56.422 31.418 - 31.651: 88.7768% ( 144) 00:13:56.422 31.651 - 31.884: 90.0549% ( 128) 00:13:56.422 31.884 - 32.116: 91.0334% ( 98) 00:13:56.422 32.116 - 32.349: 91.9621% ( 93) 00:13:56.422 32.349 - 32.582: 92.6610% ( 70) 00:13:56.422 32.582 - 32.815: 93.2601% ( 60) 00:13:56.422 32.815 - 33.047: 93.6495% ( 39) 00:13:56.422 33.047 - 33.280: 93.9890% ( 34) 00:13:56.422 33.280 - 33.513: 94.2886% ( 30) 00:13:56.422 33.513 - 33.745: 94.5082% ( 22) 00:13:56.422 33.745 - 33.978: 94.7379% ( 23) 00:13:56.422 33.978 - 34.211: 94.8977% ( 16) 00:13:56.422 34.211 - 34.444: 95.0275% ( 13) 00:13:56.422 34.444 - 34.676: 95.1173% ( 9) 00:13:56.422 34.676 - 34.909: 95.2072% ( 9) 00:13:56.422 34.909 - 35.142: 95.2671% ( 6) 00:13:56.422 35.142 - 35.375: 95.3470% ( 8) 00:13:56.422 35.375 - 35.607: 95.4368% ( 9) 00:13:56.422 35.607 - 35.840: 95.4768% ( 4) 00:13:56.422 35.840 - 36.073: 95.5367% ( 6) 00:13:56.422 36.073 - 36.305: 95.6365% ( 10) 00:13:56.422 36.305 - 36.538: 95.7064% ( 7) 00:13:56.422 36.538 - 36.771: 95.8163% ( 11) 00:13:56.422 36.771 - 37.004: 95.9261% ( 11) 00:13:56.422 37.004 - 37.236: 96.0160% ( 9) 00:13:56.422 37.236 - 37.469: 96.1558% ( 14) 00:13:56.422 37.469 - 37.702: 96.2856% ( 13) 00:13:56.422 37.702 - 37.935: 96.4254% ( 14) 00:13:56.422 37.935 - 38.167: 96.5552% ( 13) 00:13:56.422 38.167 - 38.400: 96.6450% ( 9) 00:13:56.422 38.400 - 38.633: 96.8347% ( 19) 00:13:56.422 38.633 - 38.865: 96.9745% ( 14) 00:13:56.422 38.865 - 39.098: 97.0944% ( 12) 00:13:56.422 39.098 - 39.331: 97.2641% ( 17) 00:13:56.422 39.331 - 39.564: 97.3839% ( 12) 00:13:56.422 39.564 - 39.796: 97.5437% ( 16) 00:13:56.422 39.796 - 40.029: 97.6635% ( 12) 00:13:56.422 40.029 - 40.262: 97.7933% ( 13) 00:13:56.422 40.262 - 40.495: 97.9730% ( 18) 00:13:56.422 40.495 - 40.727: 98.1128% ( 14) 00:13:56.422 40.727 - 40.960: 98.2626% ( 15) 00:13:56.422 40.960 - 41.193: 98.4523% ( 19) 00:13:56.422 41.193 - 41.425: 98.5122% ( 6) 00:13:56.422 41.425 - 41.658: 98.6221% ( 11) 00:13:56.422 41.658 - 41.891: 98.6920% ( 7) 00:13:56.422 41.891 - 42.124: 98.8318% ( 14) 00:13:56.422 42.124 - 42.356: 98.8717% ( 4) 00:13:56.422 42.356 - 
42.589: 98.9316% ( 6) 00:13:56.422 42.589 - 42.822: 98.9915% ( 6) 00:13:56.422 42.822 - 43.055: 99.1113% ( 12) 00:13:56.422 43.055 - 43.287: 99.1812% ( 7) 00:13:56.422 43.287 - 43.520: 99.2611% ( 8) 00:13:56.422 43.520 - 43.753: 99.3210% ( 6) 00:13:56.422 43.753 - 43.985: 99.4009% ( 8) 00:13:56.422 43.985 - 44.218: 99.4708% ( 7) 00:13:56.422 44.218 - 44.451: 99.5007% ( 3) 00:13:56.422 44.451 - 44.684: 99.5407% ( 4) 00:13:56.422 44.684 - 44.916: 99.5806% ( 4) 00:13:56.422 44.916 - 45.149: 99.5906% ( 1) 00:13:56.422 45.149 - 45.382: 99.6405% ( 5) 00:13:56.422 45.382 - 45.615: 99.6505% ( 1) 00:13:56.422 45.615 - 45.847: 99.6705% ( 2) 00:13:56.422 45.847 - 46.080: 99.6905% ( 2) 00:13:56.422 46.080 - 46.313: 99.7004% ( 1) 00:13:56.422 47.011 - 47.244: 99.7104% ( 1) 00:13:56.422 47.244 - 47.476: 99.7304% ( 2) 00:13:56.422 47.709 - 47.942: 99.7404% ( 1) 00:13:56.422 47.942 - 48.175: 99.7504% ( 1) 00:13:56.422 48.175 - 48.407: 99.7703% ( 2) 00:13:56.423 48.407 - 48.640: 99.7903% ( 2) 00:13:56.423 48.640 - 48.873: 99.8003% ( 1) 00:13:56.423 49.105 - 49.338: 99.8203% ( 2) 00:13:56.423 50.036 - 50.269: 99.8303% ( 1) 00:13:56.423 50.269 - 50.502: 99.8402% ( 1) 00:13:56.423 50.967 - 51.200: 99.8502% ( 1) 00:13:56.423 51.200 - 51.433: 99.8602% ( 1) 00:13:56.423 51.433 - 51.665: 99.8702% ( 1) 00:13:56.423 51.898 - 52.131: 99.8802% ( 1) 00:13:56.423 53.295 - 53.527: 99.8902% ( 1) 00:13:56.423 55.622 - 55.855: 99.9001% ( 1) 00:13:56.423 56.553 - 56.785: 99.9101% ( 1) 00:13:56.423 57.018 - 57.251: 99.9201% ( 1) 00:13:56.423 58.647 - 58.880: 99.9301% ( 1) 00:13:56.423 60.509 - 60.975: 99.9401% ( 1) 00:13:56.423 63.302 - 63.767: 99.9501% ( 1) 00:13:56.423 68.422 - 68.887: 99.9601% ( 1) 00:13:56.423 100.538 - 101.004: 99.9700% ( 1) 00:13:56.423 131.258 - 132.189: 99.9800% ( 1) 00:13:56.423 175.011 - 175.942: 99.9900% ( 1) 00:13:56.423 191.767 - 192.698: 100.0000% ( 1) 00:13:56.423 00:13:56.423 Complete histogram 00:13:56.423 ================== 00:13:56.423 Range in us Cumulative Count 00:13:56.423 9.484 - 9.542: 0.0100% ( 1) 00:13:56.423 9.542 - 9.600: 0.0399% ( 3) 00:13:56.423 9.600 - 9.658: 0.1298% ( 9) 00:13:56.423 9.658 - 9.716: 0.2396% ( 11) 00:13:56.423 9.716 - 9.775: 0.3595% ( 12) 00:13:56.423 9.775 - 9.833: 0.3894% ( 3) 00:13:56.423 9.833 - 9.891: 0.4094% ( 2) 00:13:56.423 9.891 - 9.949: 0.4194% ( 1) 00:13:56.423 9.949 - 10.007: 0.4393% ( 2) 00:13:56.423 10.007 - 10.065: 0.4593% ( 2) 00:13:56.423 10.298 - 10.356: 0.4693% ( 1) 00:13:56.423 10.764 - 10.822: 0.4793% ( 1) 00:13:56.423 10.822 - 10.880: 0.4893% ( 1) 00:13:56.423 11.113 - 11.171: 0.4993% ( 1) 00:13:56.423 13.324 - 13.382: 0.5192% ( 2) 00:13:56.423 13.382 - 13.440: 0.5292% ( 1) 00:13:56.423 13.440 - 13.498: 0.5492% ( 2) 00:13:56.423 13.615 - 13.673: 0.5791% ( 3) 00:13:56.423 13.673 - 13.731: 0.5991% ( 2) 00:13:56.423 13.731 - 13.789: 0.6291% ( 3) 00:13:56.423 13.789 - 13.847: 0.6690% ( 4) 00:13:56.423 13.847 - 13.905: 0.6890% ( 2) 00:13:56.423 13.905 - 13.964: 0.7289% ( 4) 00:13:56.423 13.964 - 14.022: 0.7489% ( 2) 00:13:56.423 14.022 - 14.080: 0.7688% ( 2) 00:13:56.423 14.080 - 14.138: 0.8088% ( 4) 00:13:56.423 14.138 - 14.196: 0.8188% ( 1) 00:13:56.423 14.196 - 14.255: 0.8288% ( 1) 00:13:56.423 14.255 - 14.313: 0.8687% ( 4) 00:13:56.423 14.313 - 14.371: 0.9086% ( 4) 00:13:56.423 14.371 - 14.429: 0.9785% ( 7) 00:13:56.423 14.487 - 14.545: 1.0285% ( 5) 00:13:56.423 14.545 - 14.604: 1.0584% ( 3) 00:13:56.423 14.604 - 14.662: 1.0984% ( 4) 00:13:56.423 14.662 - 14.720: 1.1083% ( 1) 00:13:56.423 14.720 - 14.778: 1.1483% ( 4) 00:13:56.423 14.778 - 
14.836: 1.2082% ( 6) 00:13:56.423 14.836 - 14.895: 1.2581% ( 5) 00:13:56.423 14.895 - 15.011: 1.3380% ( 8) 00:13:56.423 15.011 - 15.127: 1.4079% ( 7) 00:13:56.423 15.127 - 15.244: 1.5577% ( 15) 00:13:56.423 15.244 - 15.360: 1.6076% ( 5) 00:13:56.423 15.360 - 15.476: 1.6775% ( 7) 00:13:56.423 15.476 - 15.593: 1.7374% ( 6) 00:13:56.423 15.593 - 15.709: 1.8672% ( 13) 00:13:56.423 15.709 - 15.825: 1.9571% ( 9) 00:13:56.423 15.825 - 15.942: 2.0170% ( 6) 00:13:56.423 15.942 - 16.058: 2.1468% ( 13) 00:13:56.423 16.058 - 16.175: 2.3365% ( 19) 00:13:56.423 16.175 - 16.291: 2.6161% ( 28) 00:13:56.423 16.291 - 16.407: 2.8857% ( 27) 00:13:56.423 16.407 - 16.524: 3.3150% ( 43) 00:13:56.423 16.524 - 16.640: 3.7644% ( 45) 00:13:56.423 16.640 - 16.756: 4.4234% ( 66) 00:13:56.423 16.756 - 16.873: 5.4119% ( 99) 00:13:56.423 16.873 - 16.989: 6.3505% ( 94) 00:13:56.423 16.989 - 17.105: 7.6485% ( 130) 00:13:56.423 17.105 - 17.222: 8.9566% ( 131) 00:13:56.423 17.222 - 17.338: 10.5941% ( 164) 00:13:56.423 17.338 - 17.455: 12.1817% ( 159) 00:13:56.423 17.455 - 17.571: 14.3585% ( 218) 00:13:56.423 17.571 - 17.687: 16.3555% ( 200) 00:13:56.423 17.687 - 17.804: 18.4523% ( 210) 00:13:56.423 17.804 - 17.920: 20.6091% ( 216) 00:13:56.423 17.920 - 18.036: 23.0354% ( 243) 00:13:56.423 18.036 - 18.153: 25.2721% ( 224) 00:13:56.423 18.153 - 18.269: 27.7484% ( 248) 00:13:56.423 18.269 - 18.385: 30.1448% ( 240) 00:13:56.423 18.385 - 18.502: 32.3814% ( 224) 00:13:56.423 18.502 - 18.618: 35.0674% ( 269) 00:13:56.423 18.618 - 18.735: 37.4538% ( 239) 00:13:56.423 18.735 - 18.851: 39.9101% ( 246) 00:13:56.423 18.851 - 18.967: 42.3864% ( 248) 00:13:56.423 18.967 - 19.084: 44.8028% ( 242) 00:13:56.423 19.084 - 19.200: 47.2891% ( 249) 00:13:56.423 19.200 - 19.316: 49.3759% ( 209) 00:13:56.423 19.316 - 19.433: 51.8522% ( 248) 00:13:56.423 19.433 - 19.549: 54.3485% ( 250) 00:13:56.423 19.549 - 19.665: 56.7649% ( 242) 00:13:56.423 19.665 - 19.782: 59.1013% ( 234) 00:13:56.423 19.782 - 19.898: 61.2881% ( 219) 00:13:56.423 19.898 - 20.015: 63.2052% ( 192) 00:13:56.423 20.015 - 20.131: 65.1822% ( 198) 00:13:56.423 20.131 - 20.247: 67.4688% ( 229) 00:13:56.423 20.247 - 20.364: 69.5457% ( 208) 00:13:56.423 20.364 - 20.480: 71.3430% ( 180) 00:13:56.423 20.480 - 20.596: 73.3100% ( 197) 00:13:56.423 20.596 - 20.713: 75.0374% ( 173) 00:13:56.423 20.713 - 20.829: 76.7349% ( 170) 00:13:56.423 20.829 - 20.945: 78.3225% ( 159) 00:13:56.423 20.945 - 21.062: 79.6206% ( 130) 00:13:56.423 21.062 - 21.178: 81.1682% ( 155) 00:13:56.423 21.178 - 21.295: 82.3964% ( 123) 00:13:56.423 21.295 - 21.411: 83.5147% ( 112) 00:13:56.423 21.411 - 21.527: 84.6630% ( 115) 00:13:56.423 21.527 - 21.644: 86.0210% ( 136) 00:13:56.423 21.644 - 21.760: 87.1393% ( 112) 00:13:56.423 21.760 - 21.876: 88.0579% ( 92) 00:13:56.423 21.876 - 21.993: 88.7868% ( 73) 00:13:56.423 21.993 - 22.109: 89.6056% ( 82) 00:13:56.423 22.109 - 22.225: 90.2446% ( 64) 00:13:56.423 22.225 - 22.342: 91.0035% ( 76) 00:13:56.423 22.342 - 22.458: 91.8223% ( 82) 00:13:56.423 22.458 - 22.575: 92.4513% ( 63) 00:13:56.423 22.575 - 22.691: 92.9606% ( 51) 00:13:56.423 22.691 - 22.807: 93.3899% ( 43) 00:13:56.423 22.807 - 22.924: 93.8093% ( 42) 00:13:56.423 22.924 - 23.040: 94.1987% ( 39) 00:13:56.423 23.040 - 23.156: 94.5881% ( 39) 00:13:56.423 23.156 - 23.273: 94.9176% ( 33) 00:13:56.423 23.273 - 23.389: 95.2471% ( 33) 00:13:56.423 23.389 - 23.505: 95.4968% ( 25) 00:13:56.423 23.505 - 23.622: 95.6965% ( 20) 00:13:56.423 23.622 - 23.738: 95.9061% ( 21) 00:13:56.423 23.738 - 23.855: 96.0659% ( 16) 
00:13:56.423 23.855 - 23.971: 96.2556% ( 19) 00:13:56.423 23.971 - 24.087: 96.3555% ( 10) 00:13:56.423 24.087 - 24.204: 96.4753% ( 12) 00:13:56.423 24.204 - 24.320: 96.5452% ( 7) 00:13:56.423 24.320 - 24.436: 96.5951% ( 5) 00:13:56.423 24.436 - 24.553: 96.6450% ( 5) 00:13:56.423 24.553 - 24.669: 96.6850% ( 4) 00:13:56.423 24.669 - 24.785: 96.7149% ( 3) 00:13:56.423 24.902 - 25.018: 96.7349% ( 2) 00:13:56.423 25.018 - 25.135: 96.7549% ( 2) 00:13:56.423 25.135 - 25.251: 96.7649% ( 1) 00:13:56.423 25.251 - 25.367: 96.7848% ( 2) 00:13:56.423 25.367 - 25.484: 96.7948% ( 1) 00:13:56.423 25.484 - 25.600: 96.8148% ( 2) 00:13:56.423 25.600 - 25.716: 96.8248% ( 1) 00:13:56.423 25.716 - 25.833: 96.8347% ( 1) 00:13:56.423 25.949 - 26.065: 96.8447% ( 1) 00:13:56.423 26.065 - 26.182: 96.8547% ( 1) 00:13:56.423 26.647 - 26.764: 96.8647% ( 1) 00:13:56.423 26.880 - 26.996: 96.8747% ( 1) 00:13:56.423 26.996 - 27.113: 96.8847% ( 1) 00:13:56.423 27.345 - 27.462: 96.8947% ( 1) 00:13:56.423 27.462 - 27.578: 96.9046% ( 1) 00:13:56.423 27.695 - 27.811: 96.9246% ( 2) 00:13:56.423 28.044 - 28.160: 96.9446% ( 2) 00:13:56.423 28.276 - 28.393: 96.9546% ( 1) 00:13:56.423 28.393 - 28.509: 96.9845% ( 3) 00:13:56.423 28.509 - 28.625: 97.0045% ( 2) 00:13:56.423 28.742 - 28.858: 97.0145% ( 1) 00:13:56.423 28.858 - 28.975: 97.0245% ( 1) 00:13:56.423 28.975 - 29.091: 97.0744% ( 5) 00:13:56.423 29.091 - 29.207: 97.1143% ( 4) 00:13:56.423 29.207 - 29.324: 97.1343% ( 2) 00:13:56.423 29.324 - 29.440: 97.1942% ( 6) 00:13:56.423 29.440 - 29.556: 97.2741% ( 8) 00:13:56.423 29.556 - 29.673: 97.3240% ( 5) 00:13:56.423 29.673 - 29.789: 97.3440% ( 2) 00:13:56.423 29.789 - 30.022: 97.4538% ( 11) 00:13:56.423 30.022 - 30.255: 97.5237% ( 7) 00:13:56.423 30.255 - 30.487: 97.6535% ( 13) 00:13:56.423 30.487 - 30.720: 97.7733% ( 12) 00:13:56.423 30.720 - 30.953: 97.8932% ( 12) 00:13:56.423 30.953 - 31.185: 98.0429% ( 15) 00:13:56.423 31.185 - 31.418: 98.1727% ( 13) 00:13:56.423 31.418 - 31.651: 98.3325% ( 16) 00:13:56.423 31.651 - 31.884: 98.4823% ( 15) 00:13:56.424 31.884 - 32.116: 98.6620% ( 18) 00:13:56.424 32.116 - 32.349: 98.7619% ( 10) 00:13:56.424 32.349 - 32.582: 98.9116% ( 15) 00:13:56.424 32.582 - 32.815: 99.0215% ( 11) 00:13:56.424 32.815 - 33.047: 99.0914% ( 7) 00:13:56.424 33.047 - 33.280: 99.2511% ( 16) 00:13:56.424 33.280 - 33.513: 99.3010% ( 5) 00:13:56.424 33.513 - 33.745: 99.3709% ( 7) 00:13:56.424 33.745 - 33.978: 99.4608% ( 9) 00:13:56.424 33.978 - 34.211: 99.5007% ( 4) 00:13:56.424 34.211 - 34.444: 99.5607% ( 6) 00:13:56.424 34.444 - 34.676: 99.5906% ( 3) 00:13:56.424 34.676 - 34.909: 99.6306% ( 4) 00:13:56.424 34.909 - 35.142: 99.6905% ( 6) 00:13:56.424 35.142 - 35.375: 99.7104% ( 2) 00:13:56.424 35.375 - 35.607: 99.7404% ( 3) 00:13:56.424 35.607 - 35.840: 99.7504% ( 1) 00:13:56.424 35.840 - 36.073: 99.8003% ( 5) 00:13:56.424 36.305 - 36.538: 99.8203% ( 2) 00:13:56.424 36.538 - 36.771: 99.8303% ( 1) 00:13:56.424 36.771 - 37.004: 99.8402% ( 1) 00:13:56.424 37.004 - 37.236: 99.8502% ( 1) 00:13:56.424 37.236 - 37.469: 99.8602% ( 1) 00:13:56.424 37.702 - 37.935: 99.8702% ( 1) 00:13:56.424 37.935 - 38.167: 99.8802% ( 1) 00:13:56.424 38.400 - 38.633: 99.8902% ( 1) 00:13:56.424 39.564 - 39.796: 99.9001% ( 1) 00:13:56.424 40.029 - 40.262: 99.9101% ( 1) 00:13:56.424 40.960 - 41.193: 99.9201% ( 1) 00:13:56.424 43.055 - 43.287: 99.9301% ( 1) 00:13:56.424 45.149 - 45.382: 99.9401% ( 1) 00:13:56.424 49.804 - 50.036: 99.9501% ( 1) 00:13:56.424 55.389 - 55.622: 99.9601% ( 1) 00:13:56.424 67.025 - 67.491: 99.9700% ( 1) 00:13:56.424 
101.935 - 102.400: 99.9800% ( 1) 00:13:56.424 122.880 - 123.811: 99.9900% ( 1) 00:13:56.424 205.731 - 206.662: 100.0000% ( 1) 00:13:56.424 00:13:56.424 00:13:56.424 real 0m1.367s 00:13:56.424 user 0m1.150s 00:13:56.424 sys 0m0.158s 00:13:56.424 03:42:11 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:56.424 03:42:11 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:13:56.424 ************************************ 00:13:56.424 END TEST nvme_overhead 00:13:56.424 ************************************ 00:13:56.424 03:42:11 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:56.424 03:42:11 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:56.424 03:42:11 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:13:56.424 03:42:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.424 03:42:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.424 ************************************ 00:13:56.424 START TEST nvme_arbitration 00:13:56.424 ************************************ 00:13:56.424 03:42:11 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:00.616 Initializing NVMe Controllers 00:14:00.616 Attached to 0000:00:10.0 00:14:00.616 Attached to 0000:00:11.0 00:14:00.616 Attached to 0000:00:13.0 00:14:00.616 Attached to 0000:00:12.0 00:14:00.616 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:14:00.616 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:14:00.616 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:14:00.616 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:14:00.616 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:14:00.616 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:14:00.616 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:00.616 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:14:00.616 Initialization complete. Launching workers. 
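The nvme_overhead summary above reports submit and complete latencies in nanoseconds (avg, min, max = 28823.3, 14320.0, 191922.7 and 19788.8, 9532.7, 206037.3). A minimal sketch, not part of the suite, that restates those two lines in microseconds, assuming they were saved one per line to overhead.log (a hypothetical name):

    grep ' (in ns) avg, min, max' overhead.log |
        awk -F' = ' '{ split($2, v, ", ")
                       printf "%s: %.1f / %.1f / %.1f us (avg/min/max)\n", $1, v[1]/1000, v[2]/1000, v[3]/1000 }'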
00:14:00.616 Starting thread on core 1 with urgent priority queue 00:14:00.616 Starting thread on core 2 with urgent priority queue 00:14:00.616 Starting thread on core 3 with urgent priority queue 00:14:00.616 Starting thread on core 0 with urgent priority queue 00:14:00.616 QEMU NVMe Ctrl (12340 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:14:00.616 QEMU NVMe Ctrl (12342 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:14:00.616 QEMU NVMe Ctrl (12341 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:14:00.616 QEMU NVMe Ctrl (12342 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:14:00.616 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:14:00.616 QEMU NVMe Ctrl (12342 ) core 3: 469.33 IO/s 213.07 secs/100000 ios 00:14:00.616 ======================================================== 00:14:00.616 00:14:00.616 00:14:00.616 real 0m3.494s 00:14:00.616 user 0m9.404s 00:14:00.616 sys 0m0.179s 00:14:00.616 03:42:14 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.616 03:42:14 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:14:00.616 ************************************ 00:14:00.616 END TEST nvme_arbitration 00:14:00.616 ************************************ 00:14:00.616 03:42:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:00.616 03:42:14 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:00.616 03:42:14 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:00.616 03:42:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.616 03:42:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.616 ************************************ 00:14:00.616 START TEST nvme_single_aen 00:14:00.616 ************************************ 00:14:00.616 03:42:14 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:00.616 Asynchronous Event Request test 00:14:00.616 Attached to 0000:00:10.0 00:14:00.616 Attached to 0000:00:11.0 00:14:00.616 Attached to 0000:00:13.0 00:14:00.616 Attached to 0000:00:12.0 00:14:00.616 Reset controller to setup AER completions for this process 00:14:00.616 Registering asynchronous event callbacks... 
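In the arbitration summary above, each per-core line reports the measured throughput and the projected time for the 100000-IO workload; the two columns are consistent, since secs/100000 ios = 100000 / (IO/s). A quick check of the three distinct rates, assuming bc is available:

    for iops in 512.00 490.67 469.33; do
        echo "scale=2; 100000 / $iops" | bc     # prints 195.31, 203.80, 213.07, matching the log
    done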
00:14:00.616 Getting orig temperature thresholds of all controllers 00:14:00.616 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:00.616 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:00.616 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:00.616 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:00.616 Setting all controllers temperature threshold low to trigger AER 00:14:00.616 Waiting for all controllers temperature threshold to be set lower 00:14:00.616 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:00.616 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:00.616 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:00.616 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:00.616 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:00.616 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:00.616 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:00.616 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:00.616 Waiting for all controllers to trigger AER and reset threshold 00:14:00.616 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:00.616 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:00.616 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:00.616 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:00.616 Cleaning up... 00:14:00.616 00:14:00.616 real 0m0.355s 00:14:00.616 user 0m0.137s 00:14:00.616 sys 0m0.173s 00:14:00.616 03:42:15 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.616 03:42:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:14:00.616 ************************************ 00:14:00.616 END TEST nvme_single_aen 00:14:00.616 ************************************ 00:14:00.616 03:42:15 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:00.616 03:42:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:14:00.616 03:42:15 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:00.616 03:42:15 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.616 03:42:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.616 ************************************ 00:14:00.616 START TEST nvme_doorbell_aers 00:14:00.616 ************************************ 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:00.617 03:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:00.876 [2024-07-26 03:42:15.563439] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:10.841 Executing: test_write_invalid_db 00:14:10.841 Waiting for AER completion... 00:14:10.841 Failure: test_write_invalid_db 00:14:10.841 00:14:10.841 Executing: test_invalid_db_write_overflow_sq 00:14:10.841 Waiting for AER completion... 00:14:10.841 Failure: test_invalid_db_write_overflow_sq 00:14:10.841 00:14:10.841 Executing: test_invalid_db_write_overflow_cq 00:14:10.841 Waiting for AER completion... 00:14:10.841 Failure: test_invalid_db_write_overflow_cq 00:14:10.841 00:14:10.841 03:42:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:10.841 03:42:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:10.841 [2024-07-26 03:42:25.642814] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:20.812 Executing: test_write_invalid_db 00:14:20.812 Waiting for AER completion... 00:14:20.812 Failure: test_write_invalid_db 00:14:20.812 00:14:20.812 Executing: test_invalid_db_write_overflow_sq 00:14:20.812 Waiting for AER completion... 00:14:20.812 Failure: test_invalid_db_write_overflow_sq 00:14:20.812 00:14:20.812 Executing: test_invalid_db_write_overflow_cq 00:14:20.812 Waiting for AER completion... 00:14:20.812 Failure: test_invalid_db_write_overflow_cq 00:14:20.812 00:14:20.812 03:42:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:20.812 03:42:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:20.812 [2024-07-26 03:42:35.653436] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:30.782 Executing: test_write_invalid_db 00:14:30.782 Waiting for AER completion... 00:14:30.782 Failure: test_write_invalid_db 00:14:30.782 00:14:30.782 Executing: test_invalid_db_write_overflow_sq 00:14:30.782 Waiting for AER completion... 00:14:30.782 Failure: test_invalid_db_write_overflow_sq 00:14:30.782 00:14:30.782 Executing: test_invalid_db_write_overflow_cq 00:14:30.782 Waiting for AER completion... 
00:14:30.782 Failure: test_invalid_db_write_overflow_cq 00:14:30.782 00:14:30.782 03:42:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:30.782 03:42:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:31.040 [2024-07-26 03:42:45.737637] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.008 Executing: test_write_invalid_db 00:14:41.008 Waiting for AER completion... 00:14:41.008 Failure: test_write_invalid_db 00:14:41.008 00:14:41.008 Executing: test_invalid_db_write_overflow_sq 00:14:41.008 Waiting for AER completion... 00:14:41.008 Failure: test_invalid_db_write_overflow_sq 00:14:41.008 00:14:41.008 Executing: test_invalid_db_write_overflow_cq 00:14:41.008 Waiting for AER completion... 00:14:41.008 Failure: test_invalid_db_write_overflow_cq 00:14:41.008 00:14:41.008 00:14:41.008 real 0m40.241s 00:14:41.008 user 0m33.890s 00:14:41.008 sys 0m5.916s 00:14:41.008 03:42:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.008 03:42:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:14:41.008 ************************************ 00:14:41.008 END TEST nvme_doorbell_aers 00:14:41.008 ************************************ 00:14:41.008 03:42:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:41.008 03:42:55 nvme -- nvme/nvme.sh@97 -- # uname 00:14:41.008 03:42:55 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:14:41.008 03:42:55 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:41.008 03:42:55 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:14:41.008 03:42:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.008 03:42:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.008 ************************************ 00:14:41.008 START TEST nvme_multi_aen 00:14:41.008 ************************************ 00:14:41.008 03:42:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:41.008 [2024-07-26 03:42:55.792586] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.008 [2024-07-26 03:42:55.792739] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.008 [2024-07-26 03:42:55.792783] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.008 [2024-07-26 03:42:55.794727] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 [2024-07-26 03:42:55.794795] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 [2024-07-26 03:42:55.794856] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 
00:14:41.009 [2024-07-26 03:42:55.796324] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 [2024-07-26 03:42:55.796410] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 [2024-07-26 03:42:55.796460] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 [2024-07-26 03:42:55.797978] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 [2024-07-26 03:42:55.798037] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 [2024-07-26 03:42:55.798069] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70433) is not found. Dropping the request. 00:14:41.009 Child process pid: 70944 00:14:41.267 [Child] Asynchronous Event Request test 00:14:41.267 [Child] Attached to 0000:00:10.0 00:14:41.267 [Child] Attached to 0000:00:11.0 00:14:41.267 [Child] Attached to 0000:00:13.0 00:14:41.267 [Child] Attached to 0000:00:12.0 00:14:41.267 [Child] Registering asynchronous event callbacks... 00:14:41.267 [Child] Getting orig temperature thresholds of all controllers 00:14:41.267 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 [Child] Waiting for all controllers to trigger AER and reset threshold 00:14:41.267 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.267 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.267 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.267 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.267 [Child] Cleaning up... 00:14:41.267 Asynchronous Event Request test 00:14:41.267 Attached to 0000:00:10.0 00:14:41.267 Attached to 0000:00:11.0 00:14:41.267 Attached to 0000:00:13.0 00:14:41.267 Attached to 0000:00:12.0 00:14:41.267 Reset controller to setup AER completions for this process 00:14:41.267 Registering asynchronous event callbacks... 
00:14:41.267 Getting orig temperature thresholds of all controllers 00:14:41.267 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:41.267 Setting all controllers temperature threshold low to trigger AER 00:14:41.267 Waiting for all controllers temperature threshold to be set lower 00:14:41.267 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:41.267 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:41.267 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:41.267 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:41.267 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:41.267 Waiting for all controllers to trigger AER and reset threshold 00:14:41.267 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.268 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.268 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.268 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:41.268 Cleaning up... 00:14:41.268 00:14:41.268 real 0m0.603s 00:14:41.268 user 0m0.206s 00:14:41.268 sys 0m0.287s 00:14:41.268 03:42:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.268 03:42:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:14:41.268 ************************************ 00:14:41.268 END TEST nvme_multi_aen 00:14:41.268 ************************************ 00:14:41.268 03:42:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:41.268 03:42:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:41.268 03:42:56 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:41.268 03:42:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.268 03:42:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.268 ************************************ 00:14:41.268 START TEST nvme_startup 00:14:41.268 ************************************ 00:14:41.268 03:42:56 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:41.834 Initializing NVMe Controllers 00:14:41.834 Attached to 0000:00:10.0 00:14:41.834 Attached to 0000:00:11.0 00:14:41.835 Attached to 0000:00:13.0 00:14:41.835 Attached to 0000:00:12.0 00:14:41.835 Initialization complete. 00:14:41.835 Time used:219739.734 (us). 
00:14:41.835 00:14:41.835 real 0m0.306s 00:14:41.835 user 0m0.105s 00:14:41.835 sys 0m0.145s 00:14:41.835 03:42:56 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.835 03:42:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:14:41.835 ************************************ 00:14:41.835 END TEST nvme_startup 00:14:41.835 ************************************ 00:14:41.835 03:42:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:41.835 03:42:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:14:41.835 03:42:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:41.835 03:42:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.835 03:42:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.835 ************************************ 00:14:41.835 START TEST nvme_multi_secondary 00:14:41.835 ************************************ 00:14:41.835 03:42:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:14:41.835 03:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=71000 00:14:41.835 03:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:14:41.835 03:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=71001 00:14:41.835 03:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:41.835 03:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:14:45.116 Initializing NVMe Controllers 00:14:45.116 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:45.116 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:45.116 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:45.116 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:45.116 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:45.116 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:45.116 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:45.116 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:45.116 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:45.116 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:45.116 Initialization complete. Launching workers. 
00:14:45.117 ======================================================== 00:14:45.117 Latency(us) 00:14:45.117 Device Information : IOPS MiB/s Average min max 00:14:45.117 PCIE (0000:00:10.0) NSID 1 from core 2: 1997.42 7.80 8008.27 1370.95 20088.53 00:14:45.117 PCIE (0000:00:11.0) NSID 1 from core 2: 2002.75 7.82 7988.76 1395.73 19710.79 00:14:45.117 PCIE (0000:00:13.0) NSID 1 from core 2: 2002.75 7.82 7988.68 1354.62 18702.29 00:14:45.117 PCIE (0000:00:12.0) NSID 1 from core 2: 2008.08 7.84 7968.53 1408.57 19979.81 00:14:45.117 PCIE (0000:00:12.0) NSID 2 from core 2: 2002.75 7.82 7989.63 1399.34 20066.93 00:14:45.117 PCIE (0000:00:12.0) NSID 3 from core 2: 2002.75 7.82 7989.44 1404.10 16158.92 00:14:45.117 ======================================================== 00:14:45.117 Total : 12016.51 46.94 7988.87 1354.62 20088.53 00:14:45.117 00:14:45.117 03:42:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 71000 00:14:45.374 Initializing NVMe Controllers 00:14:45.374 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:45.374 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:45.374 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:45.374 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:45.374 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:45.374 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:45.374 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:45.374 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:45.374 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:45.374 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:45.374 Initialization complete. Launching workers. 00:14:45.374 ======================================================== 00:14:45.374 Latency(us) 00:14:45.374 Device Information : IOPS MiB/s Average min max 00:14:45.374 PCIE (0000:00:10.0) NSID 1 from core 1: 4440.85 17.35 3600.71 1555.99 12062.80 00:14:45.374 PCIE (0000:00:11.0) NSID 1 from core 1: 4440.85 17.35 3602.15 1318.28 12057.98 00:14:45.374 PCIE (0000:00:13.0) NSID 1 from core 1: 4440.85 17.35 3602.05 1463.35 12934.95 00:14:45.374 PCIE (0000:00:12.0) NSID 1 from core 1: 4440.85 17.35 3601.91 1438.70 13705.27 00:14:45.374 PCIE (0000:00:12.0) NSID 2 from core 1: 4440.85 17.35 3601.78 1513.16 11943.29 00:14:45.374 PCIE (0000:00:12.0) NSID 3 from core 1: 4440.85 17.35 3601.64 1518.13 11709.80 00:14:45.374 ======================================================== 00:14:45.374 Total : 26645.11 104.08 3601.71 1318.28 13705.27 00:14:45.374 00:14:47.278 Initializing NVMe Controllers 00:14:47.278 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:47.278 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:47.278 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:47.278 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:47.278 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:47.279 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:47.279 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:47.279 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:47.279 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:47.279 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:47.279 Initialization complete. Launching workers. 
00:14:47.279 ======================================================== 00:14:47.279 Latency(us) 00:14:47.279 Device Information : IOPS MiB/s Average min max 00:14:47.279 PCIE (0000:00:10.0) NSID 1 from core 0: 7068.66 27.61 2261.87 937.80 13343.82 00:14:47.279 PCIE (0000:00:11.0) NSID 1 from core 0: 7068.66 27.61 2262.99 956.22 12996.88 00:14:47.279 PCIE (0000:00:13.0) NSID 1 from core 0: 7068.66 27.61 2262.94 872.08 13681.57 00:14:47.279 PCIE (0000:00:12.0) NSID 1 from core 0: 7071.86 27.62 2261.88 818.18 14371.27 00:14:47.279 PCIE (0000:00:12.0) NSID 2 from core 0: 7071.86 27.62 2261.86 968.74 13513.24 00:14:47.279 PCIE (0000:00:12.0) NSID 3 from core 0: 7068.66 27.61 2262.84 969.93 13636.98 00:14:47.279 ======================================================== 00:14:47.279 Total : 42418.37 165.70 2262.40 818.18 14371.27 00:14:47.279 00:14:47.279 03:43:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 71001 00:14:47.279 03:43:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=71065 00:14:47.279 03:43:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:14:47.279 03:43:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=71066 00:14:47.279 03:43:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:47.279 03:43:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:14:50.571 Initializing NVMe Controllers 00:14:50.571 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.571 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:50.571 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:50.571 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:50.571 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:50.571 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:50.571 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:50.571 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:50.571 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:50.571 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:50.571 Initialization complete. Launching workers. 
00:14:50.571 ======================================================== 00:14:50.571 Latency(us) 00:14:50.571 Device Information : IOPS MiB/s Average min max 00:14:50.571 PCIE (0000:00:10.0) NSID 1 from core 0: 4661.45 18.21 3430.04 1241.22 11932.54 00:14:50.571 PCIE (0000:00:11.0) NSID 1 from core 0: 4661.45 18.21 3431.82 1251.03 11696.08 00:14:50.571 PCIE (0000:00:13.0) NSID 1 from core 0: 4661.45 18.21 3431.74 1329.62 11475.19 00:14:50.571 PCIE (0000:00:12.0) NSID 1 from core 0: 4661.45 18.21 3431.63 1333.86 11169.44 00:14:50.571 PCIE (0000:00:12.0) NSID 2 from core 0: 4661.45 18.21 3431.74 1301.60 11359.16 00:14:50.571 PCIE (0000:00:12.0) NSID 3 from core 0: 4661.45 18.21 3431.64 1293.03 11358.12 00:14:50.571 ======================================================== 00:14:50.571 Total : 27968.68 109.25 3431.43 1241.22 11932.54 00:14:50.571 00:14:50.571 Initializing NVMe Controllers 00:14:50.571 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.571 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:50.571 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:50.571 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:50.571 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:50.571 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:50.571 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:50.571 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:50.571 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:50.571 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:50.571 Initialization complete. Launching workers. 00:14:50.571 ======================================================== 00:14:50.571 Latency(us) 00:14:50.571 Device Information : IOPS MiB/s Average min max 00:14:50.571 PCIE (0000:00:10.0) NSID 1 from core 1: 4948.76 19.33 3231.03 993.60 9869.04 00:14:50.571 PCIE (0000:00:11.0) NSID 1 from core 1: 4948.76 19.33 3232.40 967.40 9807.79 00:14:50.571 PCIE (0000:00:13.0) NSID 1 from core 1: 4948.76 19.33 3232.26 1006.54 9904.97 00:14:50.571 PCIE (0000:00:12.0) NSID 1 from core 1: 4948.76 19.33 3232.12 1018.22 10044.35 00:14:50.571 PCIE (0000:00:12.0) NSID 2 from core 1: 4948.76 19.33 3231.97 1010.38 9092.48 00:14:50.571 PCIE (0000:00:12.0) NSID 3 from core 1: 4948.76 19.33 3231.81 1012.26 8746.10 00:14:50.571 ======================================================== 00:14:50.571 Total : 29692.57 115.99 3231.93 967.40 10044.35 00:14:50.571 00:14:53.099 Initializing NVMe Controllers 00:14:53.099 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:53.099 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:53.099 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:53.099 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:53.099 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:53.099 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:53.099 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:53.099 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:53.099 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:53.099 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:53.099 Initialization complete. Launching workers. 
00:14:53.099 ======================================================== 00:14:53.099 Latency(us) 00:14:53.099 Device Information : IOPS MiB/s Average min max 00:14:53.099 PCIE (0000:00:10.0) NSID 1 from core 2: 3166.71 12.37 5048.55 979.64 19866.89 00:14:53.099 PCIE (0000:00:11.0) NSID 1 from core 2: 3166.71 12.37 5051.70 1013.69 19192.75 00:14:53.099 PCIE (0000:00:13.0) NSID 1 from core 2: 3166.71 12.37 5051.57 1005.95 18248.20 00:14:53.099 PCIE (0000:00:12.0) NSID 1 from core 2: 3166.71 12.37 5051.74 999.12 19444.48 00:14:53.099 PCIE (0000:00:12.0) NSID 2 from core 2: 3166.71 12.37 5051.64 956.62 18325.26 00:14:53.099 PCIE (0000:00:12.0) NSID 3 from core 2: 3166.71 12.37 5051.28 859.17 19414.48 00:14:53.099 ======================================================== 00:14:53.099 Total : 19000.26 74.22 5051.08 859.17 19866.89 00:14:53.099 00:14:53.099 03:43:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 71065 00:14:53.099 03:43:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 71066 00:14:53.099 00:14:53.099 real 0m11.134s 00:14:53.099 user 0m18.602s 00:14:53.099 sys 0m0.998s 00:14:53.099 03:43:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.099 03:43:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:53.099 ************************************ 00:14:53.099 END TEST nvme_multi_secondary 00:14:53.099 ************************************ 00:14:53.099 03:43:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:53.099 03:43:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:53.099 03:43:07 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:53.099 03:43:07 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/70003 ]] 00:14:53.099 03:43:07 nvme -- common/autotest_common.sh@1088 -- # kill 70003 00:14:53.099 03:43:07 nvme -- common/autotest_common.sh@1089 -- # wait 70003 00:14:53.099 [2024-07-26 03:43:07.688157] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.688242] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.688272] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.688308] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.690910] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.690972] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.690998] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.691051] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 
00:14:53.099 [2024-07-26 03:43:07.693582] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.693646] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.693671] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.099 [2024-07-26 03:43:07.693696] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.100 [2024-07-26 03:43:07.696244] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.100 [2024-07-26 03:43:07.696307] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.100 [2024-07-26 03:43:07.696332] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.100 [2024-07-26 03:43:07.696356] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70943) is not found. Dropping the request. 00:14:53.100 [2024-07-26 03:43:07.974873] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:14:53.100 03:43:07 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:14:53.100 03:43:07 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:14:53.100 03:43:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:53.100 03:43:07 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.100 03:43:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.100 03:43:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.358 ************************************ 00:14:53.358 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:53.358 ************************************ 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:53.358 * Looking for test storage... 
00:14:53.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71225 00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:53.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:53.358 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:53.359 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71225 00:14:53.359 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 71225 ']' 00:14:53.359 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.359 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.359 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.359 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.359 03:43:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:53.359 [2024-07-26 03:43:08.244793] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:14:53.359 [2024-07-26 03:43:08.244972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71225 ] 00:14:53.617 [2024-07-26 03:43:08.427241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.875 [2024-07-26 03:43:08.674591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.875 [2024-07-26 03:43:08.674677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.875 [2024-07-26 03:43:08.674760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.875 [2024-07-26 03:43:08.674768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:54.811 nvme0n1 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_gIkAC.txt 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:54.811 true 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:14:54.811 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:54.812 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721965389 00:14:54.812 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71255 00:14:54.812 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:54.812 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:54.812 03:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:14:56.712 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:56.712 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.712 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:56.712 [2024-07-26 03:43:11.504949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:14:56.712 [2024-07-26 03:43:11.505413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:56.712 [2024-07-26 03:43:11.505461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.712 [2024-07-26 03:43:11.505487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.712 [2024-07-26 03:43:11.507450] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:56.712 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71255 00:14:56.712 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71255 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71255 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_gIkAC.txt 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_gIkAC.txt 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71225 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 71225 ']' 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 71225 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.713 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71225 00:14:56.970 killing process with pid 71225 00:14:56.970 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:56.970 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:56.970 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71225' 00:14:56.970 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 71225 00:14:56.970 03:43:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 71225 00:14:59.498 03:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:14:59.498 03:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:14:59.498 00:14:59.498 real 0m5.916s 00:14:59.498 user 0m20.590s 00:14:59.498 sys 0m0.602s 00:14:59.498 03:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.498 ************************************ 00:14:59.498 END TEST bdev_nvme_reset_stuck_adm_cmd 00:14:59.498 ************************************ 00:14:59.498 03:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 03:43:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:14:59.498 03:43:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:14:59.498 03:43:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:14:59.498 03:43:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:59.498 03:43:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.498 03:43:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 ************************************ 00:14:59.498 START TEST nvme_fio 00:14:59.498 ************************************ 00:14:59.498 03:43:13 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:14:59.498 03:43:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:59.498 03:43:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:14:59.498 03:43:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:14:59.498 
03:43:13 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:59.498 03:43:13 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:14:59.498 03:43:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:59.498 03:43:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:59.498 03:43:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:59.498 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:59.498 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:59.498 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:14:59.498 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:14:59.499 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:59.499 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:59.499 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:59.499 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:59.499 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:59.757 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:59.758 03:43:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 
00:14:59.758 03:43:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:00.016 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:00.016 fio-3.35 00:15:00.016 Starting 1 thread 00:15:03.333 00:15:03.333 test: (groupid=0, jobs=1): err= 0: pid=71410: Fri Jul 26 03:43:17 2024 00:15:03.333 read: IOPS=14.8k, BW=57.9MiB/s (60.7MB/s)(116MiB/2001msec) 00:15:03.333 slat (usec): min=4, max=632, avg= 6.78, stdev= 5.35 00:15:03.333 clat (usec): min=334, max=10904, avg=4295.13, stdev=1073.91 00:15:03.333 lat (usec): min=339, max=10952, avg=4301.91, stdev=1075.31 00:15:03.333 clat percentiles (usec): 00:15:03.333 | 1.00th=[ 2507], 5.00th=[ 2999], 10.00th=[ 3359], 20.00th=[ 3556], 00:15:03.333 | 30.00th=[ 3687], 40.00th=[ 3785], 50.00th=[ 3949], 60.00th=[ 4293], 00:15:03.333 | 70.00th=[ 4555], 80.00th=[ 4948], 90.00th=[ 5932], 95.00th=[ 6390], 00:15:03.333 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[ 8848], 99.95th=[ 8979], 00:15:03.333 | 99.99th=[10814] 00:15:03.333 bw ( KiB/s): min=54912, max=64680, per=100.00%, avg=59984.00, stdev=4894.84, samples=3 00:15:03.333 iops : min=13728, max=16170, avg=14996.00, stdev=1223.71, samples=3 00:15:03.333 write: IOPS=14.8k, BW=57.9MiB/s (60.7MB/s)(116MiB/2001msec); 0 zone resets 00:15:03.333 slat (usec): min=4, max=234, avg= 6.97, stdev= 3.33 00:15:03.333 clat (usec): min=249, max=10716, avg=4307.50, stdev=1072.70 00:15:03.333 lat (usec): min=256, max=10730, avg=4314.47, stdev=1074.14 00:15:03.333 clat percentiles (usec): 00:15:03.333 | 1.00th=[ 2507], 5.00th=[ 3032], 10.00th=[ 3359], 20.00th=[ 3556], 00:15:03.333 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 3982], 60.00th=[ 4293], 00:15:03.333 | 70.00th=[ 4621], 80.00th=[ 4948], 90.00th=[ 5932], 95.00th=[ 6390], 00:15:03.333 | 99.00th=[ 7898], 99.50th=[ 8029], 99.90th=[ 8848], 99.95th=[ 9110], 00:15:03.333 | 99.99th=[10421] 00:15:03.333 bw ( KiB/s): min=54248, max=64024, per=100.00%, avg=59682.67, stdev=4978.86, samples=3 00:15:03.333 iops : min=13562, max=16006, avg=14920.67, stdev=1244.72, samples=3 00:15:03.333 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:15:03.333 lat (msec) : 2=0.14%, 4=51.26%, 10=48.53%, 20=0.03% 00:15:03.333 cpu : usr=98.15%, sys=0.30%, ctx=2, majf=0, minf=608 00:15:03.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:03.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.333 issued rwts: total=29657,29677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.333 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.333 00:15:03.333 Run status group 0 (all jobs): 00:15:03.333 READ: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=116MiB (121MB), run=2001-2001msec 00:15:03.333 WRITE: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=116MiB (122MB), run=2001-2001msec 00:15:03.333 ----------------------------------------------------- 00:15:03.333 Suppressions used: 00:15:03.333 count bytes template 00:15:03.333 1 32 /usr/src/fio/parse.c 00:15:03.333 1 8 libtcmalloc_minimal.so 00:15:03.333 ----------------------------------------------------- 00:15:03.333 00:15:03.333 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:03.333 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # 
for bdf in "${bdfs[@]}" 00:15:03.333 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:03.333 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:03.592 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:03.592 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:03.851 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:03.851 03:43:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:03.851 03:43:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:04.110 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:04.110 fio-3.35 00:15:04.110 Starting 1 thread 00:15:07.393 00:15:07.393 test: (groupid=0, jobs=1): err= 0: pid=71476: Fri Jul 26 03:43:21 2024 00:15:07.393 read: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(111MiB/2001msec) 00:15:07.393 slat (nsec): min=4663, max=61597, avg=6742.19, stdev=2336.27 00:15:07.393 clat (usec): min=440, max=9447, avg=4481.59, stdev=923.14 00:15:07.393 lat (usec): min=449, max=9456, avg=4488.33, stdev=924.21 00:15:07.393 clat percentiles (usec): 00:15:07.393 | 1.00th=[ 2868], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3785], 00:15:07.393 | 30.00th=[ 3916], 40.00th=[ 4047], 50.00th=[ 4228], 60.00th=[ 4490], 00:15:07.393 | 70.00th=[ 4752], 80.00th=[ 5145], 90.00th=[ 5932], 
95.00th=[ 6259], 00:15:07.393 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 8356], 99.95th=[ 8717], 00:15:07.393 | 99.99th=[ 9372] 00:15:07.393 bw ( KiB/s): min=57456, max=59560, per=100.00%, avg=58496.00, stdev=1052.21, samples=3 00:15:07.393 iops : min=14364, max=14890, avg=14624.00, stdev=263.05, samples=3 00:15:07.393 write: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(111MiB/2001msec); 0 zone resets 00:15:07.393 slat (nsec): min=4791, max=44693, avg=7024.14, stdev=2432.22 00:15:07.393 clat (usec): min=391, max=9521, avg=4481.16, stdev=932.37 00:15:07.393 lat (usec): min=400, max=9531, avg=4488.19, stdev=933.44 00:15:07.393 clat percentiles (usec): 00:15:07.393 | 1.00th=[ 2802], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3785], 00:15:07.393 | 30.00th=[ 3916], 40.00th=[ 4047], 50.00th=[ 4228], 60.00th=[ 4490], 00:15:07.393 | 70.00th=[ 4752], 80.00th=[ 5145], 90.00th=[ 5932], 95.00th=[ 6259], 00:15:07.393 | 99.00th=[ 6849], 99.50th=[ 7373], 99.90th=[ 8455], 99.95th=[ 8586], 00:15:07.393 | 99.99th=[ 9241] 00:15:07.393 bw ( KiB/s): min=56624, max=59760, per=100.00%, avg=58376.00, stdev=1600.06, samples=3 00:15:07.393 iops : min=14156, max=14940, avg=14594.00, stdev=400.01, samples=3 00:15:07.393 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:07.393 lat (msec) : 2=0.09%, 4=36.87%, 10=63.02% 00:15:07.393 cpu : usr=98.95%, sys=0.00%, ctx=3, majf=0, minf=607 00:15:07.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:07.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.393 issued rwts: total=28467,28504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.393 00:15:07.393 Run status group 0 (all jobs): 00:15:07.393 READ: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=111MiB (117MB), run=2001-2001msec 00:15:07.393 WRITE: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=111MiB (117MB), run=2001-2001msec 00:15:07.393 ----------------------------------------------------- 00:15:07.393 Suppressions used: 00:15:07.393 count bytes template 00:15:07.393 1 32 /usr/src/fio/parse.c 00:15:07.393 1 8 libtcmalloc_minimal.so 00:15:07.393 ----------------------------------------------------- 00:15:07.393 00:15:07.394 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:07.394 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:07.394 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:07.394 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:07.651 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:07.651 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:07.910 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:07.910 03:43:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:07.910 03:43:22 
nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:07.910 03:43:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:08.168 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:08.168 fio-3.35 00:15:08.168 Starting 1 thread 00:15:11.453 00:15:11.453 test: (groupid=0, jobs=1): err= 0: pid=71537: Fri Jul 26 03:43:25 2024 00:15:11.453 read: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(107MiB/2001msec) 00:15:11.453 slat (nsec): min=4671, max=65164, avg=7126.85, stdev=2705.55 00:15:11.453 clat (usec): min=364, max=9078, avg=4673.27, stdev=995.27 00:15:11.453 lat (usec): min=371, max=9144, avg=4680.40, stdev=996.53 00:15:11.453 clat percentiles (usec): 00:15:11.453 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3785], 00:15:11.453 | 30.00th=[ 4047], 40.00th=[ 4359], 50.00th=[ 4555], 60.00th=[ 4752], 00:15:11.453 | 70.00th=[ 5014], 80.00th=[ 5473], 90.00th=[ 6128], 95.00th=[ 6652], 00:15:11.453 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[ 7963], 99.95th=[ 8160], 00:15:11.453 | 99.99th=[ 8979] 00:15:11.453 bw ( KiB/s): min=50832, max=58832, per=100.00%, avg=55813.33, stdev=4346.15, samples=3 00:15:11.453 iops : min=12708, max=14708, avg=13953.33, stdev=1086.54, samples=3 00:15:11.453 write: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(107MiB/2001msec); 0 zone resets 00:15:11.453 slat (nsec): min=4775, max=45613, avg=7299.92, stdev=2689.68 00:15:11.453 clat (usec): min=289, max=8873, avg=4680.28, stdev=998.04 00:15:11.453 lat (usec): min=296, max=8887, avg=4687.58, stdev=999.31 00:15:11.453 clat percentiles (usec): 00:15:11.453 | 1.00th=[ 2737], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3785], 00:15:11.453 | 30.00th=[ 4047], 40.00th=[ 4359], 50.00th=[ 4555], 60.00th=[ 4752], 00:15:11.453 | 70.00th=[ 5080], 80.00th=[ 5473], 90.00th=[ 6128], 95.00th=[ 6652], 00:15:11.453 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[ 7963], 99.95th=[ 8160], 00:15:11.453 | 99.99th=[ 8586] 00:15:11.453 bw ( KiB/s): 
min=51112, max=58664, per=100.00%, avg=55848.00, stdev=4125.89, samples=3 00:15:11.453 iops : min=12778, max=14666, avg=13962.00, stdev=1031.47, samples=3 00:15:11.453 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:15:11.453 lat (msec) : 2=0.12%, 4=28.61%, 10=71.23% 00:15:11.453 cpu : usr=98.25%, sys=0.35%, ctx=3, majf=0, minf=608 00:15:11.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:11.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.453 issued rwts: total=27307,27278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.453 00:15:11.453 Run status group 0 (all jobs): 00:15:11.453 READ: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2001-2001msec 00:15:11.454 WRITE: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=107MiB (112MB), run=2001-2001msec 00:15:11.454 ----------------------------------------------------- 00:15:11.454 Suppressions used: 00:15:11.454 count bytes template 00:15:11.454 1 32 /usr/src/fio/parse.c 00:15:11.454 1 8 libtcmalloc_minimal.so 00:15:11.454 ----------------------------------------------------- 00:15:11.454 00:15:11.454 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:11.454 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:11.454 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:11.454 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:11.454 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:11.454 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:11.713 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:11.713 03:43:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:11.713 03:43:26 
nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:11.713 03:43:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:11.971 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:11.971 fio-3.35 00:15:11.971 Starting 1 thread 00:15:16.158 00:15:16.158 test: (groupid=0, jobs=1): err= 0: pid=71592: Fri Jul 26 03:43:30 2024 00:15:16.158 read: IOPS=14.5k, BW=56.6MiB/s (59.3MB/s)(113MiB/2001msec) 00:15:16.158 slat (nsec): min=4635, max=47771, avg=6673.80, stdev=2528.27 00:15:16.158 clat (usec): min=558, max=13139, avg=4390.80, stdev=1079.21 00:15:16.158 lat (usec): min=569, max=13147, avg=4397.48, stdev=1080.49 00:15:16.158 clat percentiles (usec): 00:15:16.158 | 1.00th=[ 2540], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3654], 00:15:16.158 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4178], 00:15:16.158 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5997], 95.00th=[ 6783], 00:15:16.158 | 99.00th=[ 7635], 99.50th=[ 7963], 99.90th=[10421], 99.95th=[11469], 00:15:16.158 | 99.99th=[13042] 00:15:16.158 bw ( KiB/s): min=56456, max=60712, per=100.00%, avg=58381.33, stdev=2156.76, samples=3 00:15:16.158 iops : min=14114, max=15178, avg=14595.33, stdev=539.19, samples=3 00:15:16.158 write: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec); 0 zone resets 00:15:16.158 slat (usec): min=4, max=214, avg= 6.87, stdev= 2.94 00:15:16.158 clat (usec): min=478, max=13229, avg=4412.31, stdev=1101.35 00:15:16.158 lat (usec): min=490, max=13236, avg=4419.18, stdev=1102.71 00:15:16.158 clat percentiles (usec): 00:15:16.158 | 1.00th=[ 2540], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3687], 00:15:16.158 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4228], 00:15:16.158 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5997], 95.00th=[ 6783], 00:15:16.158 | 99.00th=[ 7635], 99.50th=[ 8094], 99.90th=[10814], 99.95th=[11600], 00:15:16.158 | 99.99th=[12780] 00:15:16.158 bw ( KiB/s): min=56608, max=61008, per=100.00%, avg=58272.00, stdev=2387.86, samples=3 00:15:16.158 iops : min=14152, max=15252, avg=14568.00, stdev=596.97, samples=3 00:15:16.158 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:16.158 lat (msec) : 2=0.31%, 4=52.75%, 10=46.78%, 20=0.14% 00:15:16.158 cpu : usr=98.75%, sys=0.15%, ctx=5, majf=0, minf=606 00:15:16.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:16.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.158 issued rwts: total=28976,29021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.158 00:15:16.158 Run status group 0 (all jobs): 00:15:16.158 READ: bw=56.6MiB/s (59.3MB/s), 56.6MiB/s-56.6MiB/s (59.3MB/s-59.3MB/s), io=113MiB (119MB), run=2001-2001msec 00:15:16.158 WRITE: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), 
run=2001-2001msec 00:15:16.416 ----------------------------------------------------- 00:15:16.416 Suppressions used: 00:15:16.416 count bytes template 00:15:16.417 1 32 /usr/src/fio/parse.c 00:15:16.417 1 8 libtcmalloc_minimal.so 00:15:16.417 ----------------------------------------------------- 00:15:16.417 00:15:16.417 03:43:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:16.417 03:43:31 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:16.417 00:15:16.417 real 0m17.203s 00:15:16.417 user 0m13.469s 00:15:16.417 sys 0m2.950s 00:15:16.417 03:43:31 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.417 03:43:31 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:16.417 ************************************ 00:15:16.417 END TEST nvme_fio 00:15:16.417 ************************************ 00:15:16.417 03:43:31 nvme -- common/autotest_common.sh@1142 -- # return 0 00:15:16.417 ************************************ 00:15:16.417 END TEST nvme 00:15:16.417 ************************************ 00:15:16.417 00:15:16.417 real 1m31.831s 00:15:16.417 user 3m46.835s 00:15:16.417 sys 0m15.395s 00:15:16.417 03:43:31 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.417 03:43:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.417 03:43:31 -- common/autotest_common.sh@1142 -- # return 0 00:15:16.417 03:43:31 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:15:16.417 03:43:31 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:16.417 03:43:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:16.417 03:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.417 03:43:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.417 ************************************ 00:15:16.417 START TEST nvme_scc 00:15:16.417 ************************************ 00:15:16.417 03:43:31 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:16.417 * Looking for test storage... 
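The fio passes above (0000:00:11.0, 0000:00:12.0, 0000:00:13.0) all follow the same per-controller recipe from nvme/nvme.sh together with the fio_plugin helper in autotest_common.sh: identify the controller, skip it if it exposes no active namespace, pick a block size, preload ASAN ahead of the SPDK fio plugin, and run the stock fio binary against the PCIe address. A minimal sketch of that flow, assuming the binary and job-file paths copied from the trace; the wrapper name run_fio_on_bdf and its argument handling are illustrative assumptions, not SPDK code:

run_fio_on_bdf() {
    local bdf=$1                 # e.g. 0000:00:11.0
    local traddr=${bdf//:/.}     # fio's filename syntax wants dots: 0000.00.11.0
    local identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

    # nvme.sh@35: skip controllers that report no active namespace.
    "$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || return 0

    # nvme.sh@38-41: bs stays 4096 unless identify reports 'Extended Data LBA'
    # (none of the controllers in this run did, so every job used --bs=4096).
    local bs=4096

    # autotest_common.sh fio_plugin: find the ASAN runtime the plugin links against and
    # preload it ahead of the SPDK ioengine so the sanitizer interceptors resolve first.
    local asan_lib
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        "--filename=trtype=PCIe traddr=$traddr" --bs=$bs
}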
00:15:16.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:16.675 03:43:31 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:16.675 03:43:31 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:16.675 03:43:31 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:16.675 03:43:31 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:16.675 03:43:31 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.675 03:43:31 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.675 03:43:31 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.675 03:43:31 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.675 03:43:31 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.675 03:43:31 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.675 03:43:31 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.676 03:43:31 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:16.676 03:43:31 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:16.676 03:43:31 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:16.676 03:43:31 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.676 03:43:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:16.676 03:43:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:16.676 03:43:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:16.676 03:43:31 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:16.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:16.934 Waiting for block devices as requested 00:15:17.193 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:17.193 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:17.193 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:17.451 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:22.723 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:22.723 03:43:37 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:22.723 03:43:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:15:22.723 03:43:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:15:22.723 03:43:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:22.723 03:43:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:22.723 
03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.723 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.724 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:22.725 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:22.725 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.725 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.726 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.727 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
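The entries above are the nvme_get helper from nvme/functions.sh filling the nvme0n1 associative array with one id-ns field per iteration. A minimal sketch of that loop, reconstructed only from the functions.sh@16-@23 trace lines in this log (not copied from the repository source, and assuming the output feeds the read loop via process substitution and that whitespace is trimmed roughly as the eval'd assignments show):

nvme_get() {
    # ref is the array name to fill (e.g. nvme0n1); the remaining arguments are the
    # nvme-cli sub-command, e.g.: nvme_get nvme0n1 id-ns /dev/nvme0n1
    local ref=$1 reg val                                  # functions.sh@17
    shift                                                 # functions.sh@18
    local -gA "$ref=()"                                   # functions.sh@20: global associative array

    while IFS=: read -r reg val; do                       # functions.sh@21
        [[ -n $val ]] || continue                         # functions.sh@22: skip lines with no value
        # assumption: key/value trimming approximates the assignments seen above,
        # e.g. nvme0n1[nsze]=0x140000, nvme1[sn]='12340 '
        eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""  # functions.sh@23
    done < <(/usr/local/src/nvme-cli/nvme "$@")           # functions.sh@16
}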
00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:22.728 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.729 03:43:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:22.729 03:43:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:15:22.729 03:43:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:15:22.729 03:43:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:22.729 03:43:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:22.729 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:22.730 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 
03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.730 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:22.731 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.731 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:22.732 03:43:37 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:22.732 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.732 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.733 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 
03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
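The same pattern repeats here for nvme1n1. The outer loop that drives these nvme_get calls is visible at functions.sh@47-@63 in this trace: it walks /sys/class/nvme/nvme*, checks each controller with pci_can_use (from scripts/common.sh), runs id-ctrl and then id-ns per namespace, and registers the results in the ctrls, nvmes, bdfs and ordered_ctrls maps. A minimal sketch reconstructed from those trace lines only; the BDF derivation from the sysfs device link is an assumption, since the trace only shows the resulting value (e.g. pci=0000:00:10.0):

scan_nvme_ctrls() {
    local ctrl ns pci ctrl_dev ns_dev
    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do               # functions.sh@47
        [[ -e $ctrl ]] || continue                       # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption; trace only shows e.g. pci=0000:00:10.0 (@49)
        pci_can_use "$pci" || continue                   # @50, helper from scripts/common.sh
        ctrl_dev=${ctrl##*/}                             # @51, e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52, fills e.g. nvme1[...]
        local -n _ctrl_ns=${ctrl_dev}_ns                 # @53
        for ns in "$ctrl/${ctrl##*/}n"*; do              # @54, e.g. /sys/class/nvme/nvme1/nvme1n1
            [[ -e $ns ]] || continue                     # @55
            ns_dev=${ns##*/}                             # @56, e.g. nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57, fills e.g. nvme1n1[...]
            _ctrl_ns[${ns##*n}]=$ns_dev                  # @58, keyed by namespace id
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
        bdfs["$ctrl_dev"]=$pci                           # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
    done
}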
00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:22.734 03:43:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:22.735 03:43:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:15:22.735 03:43:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:15:22.735 03:43:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:22.735 03:43:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.735 03:43:37 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:22.735 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
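The switch from nvme1n1 to nvme2 a little earlier comes from the outer enumeration loop (functions.sh@47-63 in the trace): it walks /sys/class/nvme/nvme*, filters each controller through scripts/common.sh:pci_can_use, runs nvme_get against the controller device, and records it in the ctrls/nvmes/bdfs/ordered_ctrls maps. Approximately as below; the ordering is compressed and the BDF lookup is an assumption, since the trace only shows its result (e.g. pci=0000:00:12.0):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed derivation of the PCI BDF
        pci_can_use "$pci" || continue                    # honours the PCI block/allow lists
        ctrl_dev=${ctrl##*/}                              # nvme1, nvme2, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme2[vid], nvme2[sn], nvme2[oacs], ...
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of this controller's namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done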
00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
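Once filled, these arrays let later test code look up controller capabilities without re-running nvme-cli. The accessor below is hypothetical (functions.sh ships its own helpers; this only illustrates the access pattern):

    # Hypothetical accessor, not part of functions.sh: indirect lookup into a controller array.
    ctrl_reg() {
        local ctrl=$1 reg=$2
        local -n _c=$ctrl        # bash nameref to the associative array, e.g. nvme2
        echo "${_c[$reg]}"
    }
    ctrl_reg nvme2 oacs          # -> 0x12a, per the trace above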
00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:22.736 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.737 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.000 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:15:23.001 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 
03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:23.001 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:23.002 03:43:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:23.002 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
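[editor's note] The trace above repeats one pattern per field: nvme/functions.sh sets IFS=:, reads a "reg : val" pair from `nvme id-ns`, skips empty values, and evals the pair into a per-namespace associative array (here nvme2n2). The following is only a minimal, self-contained sketch of that pattern, not the actual SPDK nvme/functions.sh source; the device path and array name are taken from this log for illustration.

#!/usr/bin/env bash
# Sketch of the field-capture loop seen in the trace (assumes nvme-cli is in PATH).
parse_id_ns() {
    local dev=$1 ref=$2 reg val
    declare -gA "$ref"                  # global associative array, e.g. nvme2n2
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4", matching the keys above
        val=${val# }                    # drop the single space after the colon
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"      # e.g. nvme2n2[nsze]=0x100000
    done < <(nvme id-ns "$dev")
}

# Hypothetical usage matching this log:
# parse_id_ns /dev/nvme2n2 nvme2n2
# echo "${nvme2n2[nsze]}"   # -> 0x100000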
00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.003 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:23.004 03:43:37 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.004 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.005 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
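[editor's note] For nvme2n3 (and the other QEMU namespaces above) the captured values decode consistently: flbas=0x4 selects LBA format 4, and lbaf4 reports "ms:0 lbads:12", i.e. 4096-byte logical blocks with no metadata, while nsze=0x100000 gives the namespace size in blocks. A small sketch of that arithmetic, using only values present in this log (the low FLBAS nibble as the format index follows the NVMe spec for controllers with at most 16 formats):

# Decode sketch for nvme2n3 based on the values captured above.
flbas=0x4; lbads=12; nsze=0x100000
fmt_index=$(( flbas & 0xF ))          # low nibble of FLBAS = in-use LBA format -> 4
block_size=$(( 1 << lbads ))          # 2^12 = 4096-byte logical blocks
capacity=$(( nsze * block_size ))     # 0x100000 blocks * 4096 B = 4 GiB
printf 'format %d: %d-byte blocks, %d bytes total\n' "$fmt_index" "$block_size" "$capacity"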
00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:23.006 03:43:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:23.007 03:43:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:15:23.007 03:43:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:15:23.007 03:43:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:23.007 03:43:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:23.007 03:43:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:23.007 03:43:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.007 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:23.008 03:43:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.008 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:23.009 03:43:37 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:23.009 
03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:23.009 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
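Note: the nvme_get trace above is test/common/nvme/functions.sh filling the bash associative array nvme3 from nvme-cli id-ctrl output, splitting each line on ':' and eval-ing the value into the array (the same loop repeats per namespace and again below for the nvme_fdp pass). A minimal standalone sketch of that pattern, assuming nvme-cli's usual "name : value" layout and with the whitespace trimming simplified relative to the real script:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # e.g. "oncs      " -> "oncs"
        val=${val# }                      # drop the space nvme-cli prints after ':'
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl[oncs]:-unset} sqes=${ctrl[sqes]:-unset}"
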
00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:23.010 03:43:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:15:23.010 03:43:37 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:15:23.010 03:43:37 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:15:23.010 03:43:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:15:23.010 03:43:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:15:23.010 03:43:37 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:23.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:24.140 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.140 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.140 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.140 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.402 03:43:39 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:24.402 03:43:39 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:24.402 03:43:39 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.402 03:43:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:24.402 ************************************ 00:15:24.402 START TEST nvme_simple_copy 00:15:24.402 ************************************ 00:15:24.402 03:43:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:24.660 Initializing NVMe Controllers 00:15:24.660 Attaching to 0000:00:10.0 00:15:24.660 Controller supports SCC. Attached to 0000:00:10.0 00:15:24.660 Namespace ID: 1 size: 6GB 00:15:24.660 Initialization complete. 00:15:24.660 00:15:24.660 Controller QEMU NVMe Ctrl (12340 ) 00:15:24.660 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:15:24.660 Namespace Block Size:4096 00:15:24.660 Writing LBAs 0 to 63 with Random Data 00:15:24.660 Copied LBAs from 0 - 63 to the Destination LBA 256 00:15:24.660 LBAs matching Written Data: 64 00:15:24.660 00:15:24.660 real 0m0.304s 00:15:24.660 user 0m0.113s 00:15:24.660 sys 0m0.087s 00:15:24.660 ************************************ 00:15:24.660 END TEST nvme_simple_copy 00:15:24.660 ************************************ 00:15:24.660 03:43:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.660 03:43:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:15:24.660 03:43:39 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:15:24.660 ************************************ 00:15:24.660 END TEST nvme_scc 00:15:24.660 ************************************ 00:15:24.660 00:15:24.660 real 0m8.154s 00:15:24.660 user 0m1.305s 00:15:24.660 sys 0m1.726s 00:15:24.660 03:43:39 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.660 03:43:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:24.660 03:43:39 -- common/autotest_common.sh@1142 -- # return 0 00:15:24.660 03:43:39 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:15:24.660 03:43:39 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:15:24.660 03:43:39 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:15:24.660 03:43:39 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:15:24.660 03:43:39 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:15:24.660 03:43:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:24.660 03:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.660 03:43:39 -- common/autotest_common.sh@10 -- # set +x 00:15:24.660 ************************************ 00:15:24.660 START TEST nvme_fdp 00:15:24.660 ************************************ 00:15:24.660 03:43:39 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:15:24.660 * Looking for test storage... 
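Note: in the ctrl_has_scc calls above, each controller's ONCS word (0x15d on all four QEMU controllers) is tested against bit 8, the Copy command bit, so every controller qualifies and nvme1 at 0000:00:10.0 is the one handed to the simple_copy binary, which writes LBAs 0-63 with random data, copies them to LBA 256 and verifies that all 64 LBAs match. A minimal sketch of the same feature test outside the harness (device name and awk parsing are illustrative, not taken from functions.sh):

    oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
    if (( oncs & (1 << 8) )); then
        echo "Copy command supported (oncs=$oncs)"   # 0x15d & 0x100 is non-zero
    fi
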
00:15:24.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:24.660 03:43:39 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:24.660 03:43:39 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:24.660 03:43:39 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:24.660 03:43:39 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:24.660 03:43:39 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.660 03:43:39 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.660 03:43:39 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.660 03:43:39 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.660 03:43:39 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.660 03:43:39 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.660 03:43:39 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.660 03:43:39 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:15:24.661 03:43:39 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:24.661 03:43:39 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:15:24.661 03:43:39 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.661 03:43:39 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:25.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:25.226 Waiting for block devices as requested 00:15:25.226 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.484 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.484 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.484 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:30.834 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:30.834 03:43:45 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:30.834 03:43:45 nvme_fdp -- scripts/common.sh@15 -- # local i 00:15:30.834 03:43:45 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:15:30.834 03:43:45 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:30.834 03:43:45 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 
03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.834 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:30.835 03:43:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.835 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:30.836 03:43:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:30.836 03:43:45 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.836 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:30.837 
03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.837 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:30.838 
03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:30.838 03:43:45 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:30.838 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:30.839 03:43:45 nvme_fdp -- scripts/common.sh@15 -- # local i 00:15:30.839 03:43:45 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:15:30.839 03:43:45 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:30.839 03:43:45 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:30.839 03:43:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.839 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 
03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:30.840 03:43:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:30.840 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:30.841 03:43:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.841 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:30.842 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 
03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.843 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:30.844 03:43:45 nvme_fdp -- scripts/common.sh@15 -- # local i 00:15:30.844 03:43:45 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:15:30.844 03:43:45 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:30.844 03:43:45 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.844 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:30.845 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:30.846 03:43:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.846 03:43:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.846 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:30.847 03:43:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:30.847 03:43:45 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.847 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 
03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:30.848 03:43:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:30.848 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.849 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:30.850 03:43:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.850 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:30.851 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.852 03:43:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:30.852 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:31.113 03:43:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.113 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:31.114 03:43:45 nvme_fdp -- scripts/common.sh@15 -- # local i 00:15:31.114 03:43:45 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:15:31.114 03:43:45 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:31.114 03:43:45 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:31.114 03:43:45 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.114 03:43:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.114 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
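The trace above and below is nvme/functions.sh folding identify-controller output into a per-controller bash associative array: each "register value" pair is split on ':', skipped when empty, and stored through eval. A minimal standalone sketch of that pattern, assuming the pairs sit in a plain text file (id_ctrl.txt is hypothetical) rather than coming from a live controller:

# sketch: fold "register : value" pairs into an associative array, the shape functions.sh builds per controller
declare -A nvme_regs
while IFS=: read -r reg val; do
    read -r reg <<< "$reg"    # trim whitespace around the register name
    read -r val <<< "$val"    # trim whitespace around the value
    [[ -n $reg && -n $val ]] && nvme_regs[$reg]=$val
done < id_ctrl.txt
printf 'oacs=%s acl=%s\n' "${nvme_regs[oacs]:-unset}" "${nvme_regs[acl]:-unset}"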
00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:31.115 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 
03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.116 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:31.117 03:43:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:15:31.117 03:43:45 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:15:31.117 03:43:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:15:31.117 03:43:45 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:15:31.117 03:43:45 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:31.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.250 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.250 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.250 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.250 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.250 03:43:47 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:32.250 03:43:47 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:32.250 03:43:47 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.250 03:43:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:32.250 ************************************ 00:15:32.250 START TEST nvme_flexible_data_placement 00:15:32.250 ************************************ 00:15:32.250 03:43:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:32.509 Initializing NVMe Controllers 00:15:32.509 Attaching to 0000:00:13.0 00:15:32.509 Controller supports FDP Attached to 0000:00:13.0 00:15:32.509 Namespace ID: 1 Endurance Group ID: 1 
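The controller selection just above keys off the CTRATT identify field: ctrl_has_fdp treats a controller as FDP-capable only when bit 19 is set, which is why nvme3 (ctratt=0x88010) is echoed while the 0x8000 controllers are passed over. A condensed sketch of that check, with the CTRATT values from this run hard-coded for illustration:

# sketch: CTRATT bit 19 marks FDP support (values below are the ones reported in this run)
declare -A ctratt_by_ctrl=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
for ctrl in "${!ctratt_by_ctrl[@]}"; do
    ctratt=${ctratt_by_ctrl[$ctrl]}
    if (( ctratt & 1 << 19 )); then
        echo "$ctrl supports FDP"    # only nvme3 passes in this run
    fi
done

The placement run that follows is the standalone fdp example driven with the same transport string the harness uses, so it can be repeated by hand (as root, after scripts/setup.sh has bound the device): test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'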
00:15:32.509 Initialization complete. 00:15:32.509 00:15:32.509 ================================== 00:15:32.509 == FDP tests for Namespace: #01 == 00:15:32.509 ================================== 00:15:32.509 00:15:32.509 Get Feature: FDP: 00:15:32.509 ================= 00:15:32.509 Enabled: Yes 00:15:32.509 FDP configuration Index: 0 00:15:32.509 00:15:32.509 FDP configurations log page 00:15:32.509 =========================== 00:15:32.509 Number of FDP configurations: 1 00:15:32.509 Version: 0 00:15:32.509 Size: 112 00:15:32.509 FDP Configuration Descriptor: 0 00:15:32.509 Descriptor Size: 96 00:15:32.509 Reclaim Group Identifier format: 2 00:15:32.509 FDP Volatile Write Cache: Not Present 00:15:32.509 FDP Configuration: Valid 00:15:32.509 Vendor Specific Size: 0 00:15:32.509 Number of Reclaim Groups: 2 00:15:32.509 Number of Reclaim Unit Handles: 8 00:15:32.509 Max Placement Identifiers: 128 00:15:32.509 Number of Namespaces Supported: 256 00:15:32.509 Reclaim unit Nominal Size: 6000000 bytes 00:15:32.509 Estimated Reclaim Unit Time Limit: Not Reported 00:15:32.509 RUH Desc #000: RUH Type: Initially Isolated 00:15:32.509 RUH Desc #001: RUH Type: Initially Isolated 00:15:32.509 RUH Desc #002: RUH Type: Initially Isolated 00:15:32.510 RUH Desc #003: RUH Type: Initially Isolated 00:15:32.510 RUH Desc #004: RUH Type: Initially Isolated 00:15:32.510 RUH Desc #005: RUH Type: Initially Isolated 00:15:32.510 RUH Desc #006: RUH Type: Initially Isolated 00:15:32.510 RUH Desc #007: RUH Type: Initially Isolated 00:15:32.510 00:15:32.510 FDP reclaim unit handle usage log page 00:15:32.510 ====================================== 00:15:32.510 Number of Reclaim Unit Handles: 8 00:15:32.510 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:32.510 RUH Usage Desc #001: RUH Attributes: Unused 00:15:32.510 RUH Usage Desc #002: RUH Attributes: Unused 00:15:32.510 RUH Usage Desc #003: RUH Attributes: Unused 00:15:32.510 RUH Usage Desc #004: RUH Attributes: Unused 00:15:32.510 RUH Usage Desc #005: RUH Attributes: Unused 00:15:32.510 RUH Usage Desc #006: RUH Attributes: Unused 00:15:32.510 RUH Usage Desc #007: RUH Attributes: Unused 00:15:32.510 00:15:32.510 FDP statistics log page 00:15:32.510 ======================= 00:15:32.510 Host bytes with metadata written: 732463104 00:15:32.510 Media bytes with metadata written: 732602368 00:15:32.510 Media bytes erased: 0 00:15:32.510 00:15:32.510 FDP Reclaim unit handle status 00:15:32.510 ============================== 00:15:32.510 Number of RUHS descriptors: 2 00:15:32.510 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004578 00:15:32.510 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:15:32.510 00:15:32.510 FDP write on placement id: 0 success 00:15:32.510 00:15:32.510 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:15:32.510 00:15:32.510 IO mgmt send: RUH update for Placement ID: #0 Success 00:15:32.510 00:15:32.510 Get Feature: FDP Events for Placement handle: #0 00:15:32.510 ======================== 00:15:32.510 Number of FDP Events: 6 00:15:32.510 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:15:32.510 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:15:32.510 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:15:32.510 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:15:32.510 FDP Event: #4 Type: Media Reallocated Enabled: No 00:15:32.510 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:15:32.510 00:15:32.510 FDP events log page 00:15:32.510 =================== 00:15:32.510 Number of FDP events: 1 00:15:32.510 FDP Event #0: 00:15:32.510 Event Type: RU Not Written to Capacity 00:15:32.510 Placement Identifier: Valid 00:15:32.510 NSID: Valid 00:15:32.510 Location: Valid 00:15:32.510 Placement Identifier: 0 00:15:32.510 Event Timestamp: 7 00:15:32.510 Namespace Identifier: 1 00:15:32.510 Reclaim Group Identifier: 0 00:15:32.510 Reclaim Unit Handle Identifier: 0 00:15:32.510 00:15:32.510 FDP test passed 00:15:32.510 00:15:32.510 real 0m0.289s 00:15:32.510 user 0m0.086s 00:15:32.510 sys 0m0.100s 00:15:32.510 03:43:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.510 03:43:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:15:32.510 ************************************ 00:15:32.510 END TEST nvme_flexible_data_placement 00:15:32.510 ************************************ 00:15:32.769 03:43:47 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:15:32.769 00:15:32.769 real 0m7.972s 00:15:32.769 user 0m1.262s 00:15:32.769 sys 0m1.620s 00:15:32.769 03:43:47 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.769 03:43:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:32.769 ************************************ 00:15:32.769 END TEST nvme_fdp 00:15:32.769 ************************************ 00:15:32.769 03:43:47 -- common/autotest_common.sh@1142 -- # return 0 00:15:32.769 03:43:47 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:15:32.769 03:43:47 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:32.769 03:43:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:32.769 03:43:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.769 03:43:47 -- common/autotest_common.sh@10 -- # set +x 00:15:32.769 ************************************ 00:15:32.769 START TEST nvme_rpc 00:15:32.769 ************************************ 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:32.769 * Looking for test storage... 
00:15:32.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:32.769 03:43:47 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.769 03:43:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:15:32.769 03:43:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:15:32.769 03:43:47 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72931 00:15:32.769 03:43:47 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:32.769 03:43:47 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:32.769 03:43:47 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72931 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72931 ']' 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.769 03:43:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.027 [2024-07-26 03:43:47.745452] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:15:33.027 [2024-07-26 03:43:47.745610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72931 ] 00:15:33.027 [2024-07-26 03:43:47.907686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:33.286 [2024-07-26 03:43:48.139839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.286 [2024-07-26 03:43:48.139843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.248 03:43:48 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.248 03:43:48 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:34.248 03:43:48 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:15:34.248 Nvme0n1 00:15:34.509 03:43:49 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:34.509 03:43:49 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:34.509 request: 00:15:34.509 { 00:15:34.509 "bdev_name": "Nvme0n1", 00:15:34.509 "filename": "non_existing_file", 00:15:34.509 "method": "bdev_nvme_apply_firmware", 00:15:34.509 "req_id": 1 00:15:34.509 } 00:15:34.509 Got JSON-RPC error response 00:15:34.509 response: 00:15:34.509 { 00:15:34.509 "code": -32603, 00:15:34.509 "message": "open file failed." 00:15:34.509 } 00:15:34.509 03:43:49 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:34.509 03:43:49 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:34.509 03:43:49 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:34.768 03:43:49 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:34.768 03:43:49 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72931 00:15:34.768 03:43:49 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72931 ']' 00:15:34.768 03:43:49 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72931 00:15:34.768 03:43:49 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:15:34.768 03:43:49 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.768 03:43:49 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72931 00:15:35.026 03:43:49 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:35.026 03:43:49 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:35.026 killing process with pid 72931 00:15:35.026 03:43:49 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72931' 00:15:35.026 03:43:49 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72931 00:15:35.026 03:43:49 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72931 00:15:36.926 00:15:36.926 real 0m4.294s 00:15:36.926 user 0m8.101s 00:15:36.926 sys 0m0.564s 00:15:36.926 03:43:51 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.926 ************************************ 00:15:36.926 END TEST nvme_rpc 00:15:36.926 ************************************ 00:15:36.926 03:43:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.926 03:43:51 -- common/autotest_common.sh@1142 -- # return 0 00:15:36.926 03:43:51 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
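Stripped of the shell plumbing, the nvme_rpc run above amounts to three JSON-RPC calls against the spdk_tgt it launched with -m 0x3: attach the 0000:00:10.0 controller as Nvme0, apply firmware from a deliberately nonexistent file (expected to fail with -32603 "open file failed."), and detach again. A condensed sketch of replaying the same sequence by hand while a spdk_tgt is listening on the default socket:

# sketch: replay the nvme_rpc call sequence (requires a running spdk_tgt)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
$rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "apply_firmware failed as expected"
$rpc bdev_nvme_detach_controller Nvme0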
00:15:36.926 03:43:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:36.926 03:43:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.926 03:43:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.926 ************************************ 00:15:36.926 START TEST nvme_rpc_timeouts 00:15:36.926 ************************************ 00:15:36.926 03:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:37.184 * Looking for test storage... 00:15:37.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:37.184 03:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.184 03:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_73007 00:15:37.184 03:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_73007 00:15:37.184 03:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=73031 00:15:37.184 03:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:37.184 03:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:15:37.184 03:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 73031 00:15:37.184 03:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 73031 ']' 00:15:37.184 03:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.184 03:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.184 03:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.184 03:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.184 03:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:37.184 [2024-07-26 03:43:52.051070] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:15:37.184 [2024-07-26 03:43:52.051257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73031 ] 00:15:37.442 [2024-07-26 03:43:52.225605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:37.700 [2024-07-26 03:43:52.446223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.700 [2024-07-26 03:43:52.446234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.265 03:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.525 03:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:15:38.525 Checking default timeout settings: 00:15:38.525 03:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:38.525 03:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:38.782 Making settings changes with rpc: 00:15:38.782 03:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:38.782 03:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:39.040 Check default vs. modified settings: 00:15:39.040 03:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:15:39.040 03:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:39.299 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:39.299 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:39.299 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:39.299 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_73007 00:15:39.299 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_73007 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:39.559 Setting action_on_timeout is changed as expected. 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_73007 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_73007 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:39.559 Setting timeout_us is changed as expected. 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_73007 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_73007 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:39.559 Setting timeout_admin_us is changed as expected. 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
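Every "changed as expected" line above comes from one compare: save the configuration before and after bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort, pull a single field out of each snapshot with grep/awk/sed, and confirm the modified snapshot differs from the default. A condensed sketch of that compare, assuming the two snapshot files written by this run are still in place:

# sketch: compare one setting between the default and modified config snapshots
setting=timeout_us
before=$(grep "$setting" /tmp/settings_default_73007 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
after=$(grep "$setting" /tmp/settings_modified_73007 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
if [[ $before != "$after" ]]; then
    echo "Setting $setting is changed as expected."
fi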
00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_73007 /tmp/settings_modified_73007 00:15:39.559 03:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 73031 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 73031 ']' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 73031 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73031 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:39.559 killing process with pid 73031 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73031' 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 73031 00:15:39.559 03:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 73031 00:15:42.088 RPC TIMEOUT SETTING TEST PASSED. 00:15:42.088 03:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:15:42.088 00:15:42.088 real 0m4.594s 00:15:42.088 user 0m8.799s 00:15:42.088 sys 0m0.573s 00:15:42.088 03:43:56 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:42.088 03:43:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:42.088 ************************************ 00:15:42.088 END TEST nvme_rpc_timeouts 00:15:42.088 ************************************ 00:15:42.088 03:43:56 -- common/autotest_common.sh@1142 -- # return 0 00:15:42.088 03:43:56 -- spdk/autotest.sh@243 -- # uname -s 00:15:42.088 03:43:56 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:15:42.088 03:43:56 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:42.088 03:43:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:42.088 03:43:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.088 03:43:56 -- common/autotest_common.sh@10 -- # set +x 00:15:42.088 ************************************ 00:15:42.088 START TEST sw_hotplug 00:15:42.088 ************************************ 00:15:42.088 03:43:56 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:42.088 * Looking for test storage... 
00:15:42.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:42.088 03:43:56 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:42.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:42.088 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.088 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.088 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.347 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.347 03:43:57 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:15:42.347 03:43:57 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:15:42.347 03:43:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:15:42.347 03:43:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@230 -- # local class 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@15 -- # local i 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:15:42.347 03:43:57 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@15 -- # local i 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@15 -- # local i 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:15:42.347 03:43:57 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:42.347 03:43:57 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:15:42.347 03:43:57 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:42.347 03:43:57 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:42.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:42.863 Waiting for block devices as requested 00:15:42.863 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:42.863 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:42.863 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:43.122 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:48.391 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:48.391 03:44:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:15:48.391 03:44:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:48.648 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:15:48.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:48.648 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:15:48.907 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:15:49.165 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:49.165 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:49.165 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:15:49.165 03:44:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.423 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:15:49.423 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:49.423 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73891 00:15:49.423 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:15:49.423 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:15:49.423 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:49.423 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:15:49.423 03:44:04 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:15:49.423 03:44:04 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:15:49.423 03:44:04 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:15:49.424 03:44:04 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:15:49.424 03:44:04 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:15:49.424 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:49.424 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:49.424 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:15:49.424 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:49.424 03:44:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:49.682 Initializing NVMe Controllers 00:15:49.682 Attaching to 0000:00:10.0 00:15:49.682 Attaching to 0000:00:11.0 00:15:49.682 Attached to 0000:00:10.0 00:15:49.682 Attached to 0000:00:11.0 00:15:49.682 Initialization complete. Starting I/O... 
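The nvme_in_userspace walk above builds the device list the hotplug run uses: lspci output is filtered to class code 01/08/02 (NVMe), each bdf is vetted with pci_can_use against the allow/block lists, and the surviving addresses become the nvmes array (truncated here to nvme_count=2). A condensed sketch of the lspci side of that filter; the stage order is reconstructed from the trace, so treat it as illustrative rather than a verbatim copy of scripts/common.sh:

# sketch: list NVMe PCI addresses (class 0108, prog-if 02) the way the common.sh helpers enumerate them
lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'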
00:15:49.682 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:49.682 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:15:49.682 00:15:50.616 QEMU NVMe Ctrl (12340 ): 1075 I/Os completed (+1075) 00:15:50.616 QEMU NVMe Ctrl (12341 ): 1213 I/Os completed (+1213) 00:15:50.616 00:15:51.552 QEMU NVMe Ctrl (12340 ): 2477 I/Os completed (+1402) 00:15:51.552 QEMU NVMe Ctrl (12341 ): 2767 I/Os completed (+1554) 00:15:51.552 00:15:52.513 QEMU NVMe Ctrl (12340 ): 4049 I/Os completed (+1572) 00:15:52.513 QEMU NVMe Ctrl (12341 ): 4533 I/Os completed (+1766) 00:15:52.513 00:15:53.885 QEMU NVMe Ctrl (12340 ): 5695 I/Os completed (+1646) 00:15:53.885 QEMU NVMe Ctrl (12341 ): 6384 I/Os completed (+1851) 00:15:53.885 00:15:54.815 QEMU NVMe Ctrl (12340 ): 7285 I/Os completed (+1590) 00:15:54.815 QEMU NVMe Ctrl (12341 ): 8260 I/Os completed (+1876) 00:15:54.815 00:15:55.377 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:55.377 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:55.377 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:55.377 [2024-07-26 03:44:10.118366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:55.377 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:55.377 [2024-07-26 03:44:10.120327] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.120405] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.120435] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.120461] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:55.377 [2024-07-26 03:44:10.123439] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.123509] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.123535] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.123560] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:15:55.377 EAL: Scan for (pci) bus failed. 00:15:55.377 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:55.377 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:55.377 [2024-07-26 03:44:10.146084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:55.377 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:55.377 [2024-07-26 03:44:10.147885] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.147950] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.147985] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.148008] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:55.377 [2024-07-26 03:44:10.150628] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.150686] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.150715] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 [2024-07-26 03:44:10.150735] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.377 EAL: Cannot open sysfs resource 00:15:55.377 EAL: pci_scan_one(): cannot parse resource 00:15:55.377 EAL: Scan for (pci) bus failed. 00:15:55.377 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:55.377 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:55.635 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:55.635 Attaching to 0000:00:10.0 00:15:55.635 Attached to 0000:00:10.0 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:55.635 03:44:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:55.635 Attaching to 0000:00:11.0 00:15:55.635 Attached to 0000:00:11.0 00:15:56.568 QEMU NVMe Ctrl (12340 ): 1447 I/Os completed (+1447) 00:15:56.568 QEMU NVMe Ctrl (12341 ): 1682 I/Os completed (+1682) 00:15:56.568 00:15:57.501 QEMU NVMe Ctrl (12340 ): 2952 I/Os completed (+1505) 00:15:57.501 QEMU NVMe Ctrl (12341 ): 3453 I/Os completed (+1771) 00:15:57.501 00:15:58.875 QEMU NVMe Ctrl (12340 ): 4720 I/Os completed (+1768) 00:15:58.875 QEMU NVMe Ctrl (12341 ): 5362 I/Os completed (+1909) 00:15:58.875 00:15:59.809 QEMU NVMe Ctrl (12340 ): 6407 I/Os completed (+1687) 00:15:59.809 QEMU NVMe Ctrl (12341 ): 7351 I/Os completed (+1989) 00:15:59.809 00:16:00.778 QEMU NVMe Ctrl (12340 ): 8079 I/Os completed (+1672) 00:16:00.778 QEMU NVMe Ctrl (12341 ): 9163 I/Os completed (+1812) 00:16:00.778 00:16:01.714 QEMU NVMe Ctrl (12340 ): 9674 I/Os completed (+1595) 00:16:01.714 QEMU NVMe Ctrl (12341 ): 10929 I/Os completed (+1766) 00:16:01.714 00:16:02.648 QEMU NVMe Ctrl (12340 ): 11477 I/Os completed (+1803) 00:16:02.648 QEMU NVMe Ctrl (12341 ): 12971 I/Os completed (+2042) 00:16:02.648 
00:16:03.618 QEMU NVMe Ctrl (12340 ): 13084 I/Os completed (+1607) 00:16:03.618 QEMU NVMe Ctrl (12341 ): 14805 I/Os completed (+1834) 00:16:03.618 00:16:04.552 QEMU NVMe Ctrl (12340 ): 14654 I/Os completed (+1570) 00:16:04.552 QEMU NVMe Ctrl (12341 ): 16617 I/Os completed (+1812) 00:16:04.552 00:16:05.486 QEMU NVMe Ctrl (12340 ): 16441 I/Os completed (+1787) 00:16:05.486 QEMU NVMe Ctrl (12341 ): 18611 I/Os completed (+1994) 00:16:05.486 00:16:06.862 QEMU NVMe Ctrl (12340 ): 18078 I/Os completed (+1637) 00:16:06.862 QEMU NVMe Ctrl (12341 ): 20784 I/Os completed (+2173) 00:16:06.862 00:16:07.797 QEMU NVMe Ctrl (12340 ): 19950 I/Os completed (+1872) 00:16:07.797 QEMU NVMe Ctrl (12341 ): 22626 I/Os completed (+1842) 00:16:07.797 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:07.797 [2024-07-26 03:44:22.486756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:07.797 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:07.797 [2024-07-26 03:44:22.489799] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.489947] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.489985] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.490016] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:07.797 [2024-07-26 03:44:22.494479] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.494623] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.494700] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.494750] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:07.797 [2024-07-26 03:44:22.520777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:16:07.797 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:07.797 [2024-07-26 03:44:22.523164] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.523233] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.523286] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.523326] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:07.797 [2024-07-26 03:44:22.526031] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.526089] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.526117] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 [2024-07-26 03:44:22.526139] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:07.797 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:07.797 EAL: Scan for (pci) bus failed. 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:07.797 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:08.055 Attaching to 0000:00:10.0 00:16:08.055 Attached to 0000:00:10.0 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:08.055 03:44:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:08.055 Attaching to 0000:00:11.0 00:16:08.055 Attached to 0000:00:11.0 00:16:08.623 QEMU NVMe Ctrl (12340 ): 1058 I/Os completed (+1058) 00:16:08.623 QEMU NVMe Ctrl (12341 ): 994 I/Os completed (+994) 00:16:08.623 00:16:09.557 QEMU NVMe Ctrl (12340 ): 2659 I/Os completed (+1601) 00:16:09.557 QEMU NVMe Ctrl (12341 ): 2857 I/Os completed (+1863) 00:16:09.557 00:16:10.491 QEMU NVMe Ctrl (12340 ): 4441 I/Os completed (+1782) 00:16:10.491 QEMU NVMe Ctrl (12341 ): 4699 I/Os completed (+1842) 00:16:10.491 00:16:11.867 QEMU NVMe Ctrl (12340 ): 6178 I/Os completed (+1737) 00:16:11.867 QEMU NVMe Ctrl (12341 ): 6687 I/Os completed (+1988) 00:16:11.867 00:16:12.802 QEMU NVMe Ctrl (12340 ): 7923 I/Os completed (+1745) 00:16:12.802 QEMU NVMe Ctrl (12341 ): 8550 I/Os completed (+1863) 00:16:12.802 00:16:13.735 QEMU NVMe Ctrl (12340 ): 9498 I/Os completed (+1575) 00:16:13.736 QEMU NVMe Ctrl (12341 ): 10400 I/Os completed (+1850) 00:16:13.736 00:16:14.671 QEMU NVMe Ctrl (12340 ): 11220 I/Os completed (+1722) 00:16:14.671 QEMU NVMe Ctrl (12341 ): 12235 I/Os completed (+1835) 00:16:14.671 00:16:15.608 
QEMU NVMe Ctrl (12340 ): 12861 I/Os completed (+1641) 00:16:15.608 QEMU NVMe Ctrl (12341 ): 14177 I/Os completed (+1942) 00:16:15.608 00:16:16.545 QEMU NVMe Ctrl (12340 ): 14354 I/Os completed (+1493) 00:16:16.545 QEMU NVMe Ctrl (12341 ): 15908 I/Os completed (+1731) 00:16:16.545 00:16:17.480 QEMU NVMe Ctrl (12340 ): 15960 I/Os completed (+1606) 00:16:17.480 QEMU NVMe Ctrl (12341 ): 17801 I/Os completed (+1893) 00:16:17.480 00:16:18.855 QEMU NVMe Ctrl (12340 ): 17836 I/Os completed (+1876) 00:16:18.855 QEMU NVMe Ctrl (12341 ): 19898 I/Os completed (+2097) 00:16:18.855 00:16:19.789 QEMU NVMe Ctrl (12340 ): 19334 I/Os completed (+1498) 00:16:19.789 QEMU NVMe Ctrl (12341 ): 21706 I/Os completed (+1808) 00:16:19.789 00:16:20.047 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:20.047 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:20.047 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:20.047 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:20.047 [2024-07-26 03:44:34.861224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:20.047 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:20.047 [2024-07-26 03:44:34.863477] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.047 [2024-07-26 03:44:34.863553] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.047 [2024-07-26 03:44:34.863587] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.047 [2024-07-26 03:44:34.863618] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.047 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:20.047 [2024-07-26 03:44:34.867155] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.047 [2024-07-26 03:44:34.867361] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.867460] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.867540] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:16:20.048 EAL: Scan for (pci) bus failed. 00:16:20.048 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:20.048 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:20.048 [2024-07-26 03:44:34.897794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:16:20.048 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:20.048 [2024-07-26 03:44:34.899748] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.899835] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.899870] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.899894] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:20.048 [2024-07-26 03:44:34.902431] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.902491] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.902521] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 [2024-07-26 03:44:34.902541] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.048 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:20.048 03:44:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:20.307 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:20.307 Attaching to 0000:00:10.0 00:16:20.307 Attached to 0000:00:10.0 00:16:20.565 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:20.565 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:20.565 03:44:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:20.565 Attaching to 0000:00:11.0 00:16:20.565 Attached to 0000:00:11.0 00:16:20.565 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:20.565 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:20.565 [2024-07-26 03:44:35.248873] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:16:32.766 03:44:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:32.766 03:44:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:32.766 03:44:47 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.12 00:16:32.766 03:44:47 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.12 00:16:32.766 03:44:47 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:16:32.766 03:44:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.12 00:16:32.766 03:44:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.12 2 00:16:32.766 remove_attach_helper took 43.12s to complete (handling 2 nvme drive(s)) 03:44:47 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:16:39.324 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73891 00:16:39.324 
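(Editor's note: the "remove_attach_helper took 43.12s" summary above is produced by timing_cmd, which runs the helper under bash's `time` builtin with TIMEFORMAT=%2R so that only wall-clock seconds are printed. A stand-alone sketch of that pattern follows; the helper body here is a placeholder, not the real function.)

    # Sketch: time a shell function to two decimals of wall-clock seconds.
    remove_attach_helper() { sleep 1; }     # placeholder body for illustration

    TIMEFORMAT=%2R                          # print only real time, two decimal places
    helper_time=$( { time remove_attach_helper >/dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete\n' "$helper_time"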
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73891) - No such process 00:16:39.324 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73891 00:16:39.324 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:16:39.324 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:16:39.324 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:16:39.324 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74425 00:16:39.325 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.325 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:39.325 03:44:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74425 00:16:39.325 03:44:53 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74425 ']' 00:16:39.325 03:44:53 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.325 03:44:53 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.325 03:44:53 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.325 03:44:53 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.325 03:44:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:39.325 [2024-07-26 03:44:53.344216] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:16:39.325 [2024-07-26 03:44:53.344379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74425 ] 00:16:39.325 [2024-07-26 03:44:53.505601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.325 [2024-07-26 03:44:53.691235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:16:39.583 03:44:54 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:39.583 03:44:54 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:39.583 03:44:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:46.149 03:45:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.149 03:45:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:46.149 [2024-07-26 03:45:00.498862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:46.149 03:45:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.149 [2024-07-26 03:45:00.501578] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.501632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.501670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 [2024-07-26 03:45:00.501699] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.501721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.501737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 [2024-07-26 03:45:00.501755] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.501769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.501786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 [2024-07-26 03:45:00.501801] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.501837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.501855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 03:45:00 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:46.149 03:45:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:46.149 [2024-07-26 03:45:00.898872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:16:46.149 [2024-07-26 03:45:00.901683] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.901735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.901757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 [2024-07-26 03:45:00.901788] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.901805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.901838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 [2024-07-26 03:45:00.901857] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.901875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.901890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 [2024-07-26 03:45:00.901908] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:46.149 [2024-07-26 03:45:00.901922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.149 [2024-07-26 03:45:00.901939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.149 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:46.149 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:46.149 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:46.149 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:46.149 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:46.149 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:46.149 03:45:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.149 03:45:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:46.149 03:45:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:46.407 
03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:46.407 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:46.665 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:46.665 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:46.665 03:45:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:58.880 03:45:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.880 03:45:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:58.880 03:45:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:58.880 03:45:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.880 03:45:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:58.880 [2024-07-26 03:45:13.499186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
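(Editor's note: in this target-based phase the helper no longer watches sysfs; bdev_bdfs — traced above at sw_hotplug.sh@12/@13 — asks the running spdk_tgt which NVMe bdevs still exist and extracts their PCI addresses with jq, sleeping 0.5 s until the list drains. A minimal sketch of that polling loop follows; it calls scripts/rpc.py directly, whereas the test goes through its rpc_cmd wrapper, and it assumes the target is listening on /var/tmp/spdk.sock with jq installed.)

    # Sketch: wait until no NVMe bdev reports a PCI address anymore.
    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    while true; do
        bdfs=($(bdev_bdfs))
        (( ${#bdfs[@]} > 0 )) || break
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done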
00:16:58.880 [2024-07-26 03:45:13.502042] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.880 [2024-07-26 03:45:13.502094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.880 [2024-07-26 03:45:13.502119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.880 [2024-07-26 03:45:13.502183] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.880 [2024-07-26 03:45:13.502213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.880 [2024-07-26 03:45:13.502230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.880 [2024-07-26 03:45:13.502250] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.880 [2024-07-26 03:45:13.502265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.880 [2024-07-26 03:45:13.502282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.880 [2024-07-26 03:45:13.502297] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.880 [2024-07-26 03:45:13.502319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.880 [2024-07-26 03:45:13.502334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.880 03:45:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:58.880 03:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:59.138 [2024-07-26 03:45:13.899160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:16:59.138 [2024-07-26 03:45:13.901934] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:59.138 [2024-07-26 03:45:13.901990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.138 [2024-07-26 03:45:13.902013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.138 [2024-07-26 03:45:13.902044] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:59.138 [2024-07-26 03:45:13.902061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.138 [2024-07-26 03:45:13.902112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.138 [2024-07-26 03:45:13.902129] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:59.138 [2024-07-26 03:45:13.902147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.138 [2024-07-26 03:45:13.902162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.138 [2024-07-26 03:45:13.902181] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:59.138 [2024-07-26 03:45:13.902195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.138 [2024-07-26 03:45:13.902212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.138 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:59.138 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:59.138 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:59.138 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:59.138 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:59.138 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:59.138 03:45:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.138 03:45:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:59.395 03:45:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:59.395 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:59.652 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:59.652 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:59.652 03:45:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.850 03:45:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.850 03:45:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:11.850 03:45:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.850 03:45:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.850 03:45:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:11.850 [2024-07-26 03:45:26.500157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:17:11.850 [2024-07-26 03:45:26.502945] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.850 [2024-07-26 03:45:26.502997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.850 [2024-07-26 03:45:26.503022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.850 [2024-07-26 03:45:26.503050] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.850 [2024-07-26 03:45:26.503070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.850 [2024-07-26 03:45:26.503086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.850 [2024-07-26 03:45:26.503106] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.850 [2024-07-26 03:45:26.503121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.850 [2024-07-26 03:45:26.503137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.850 [2024-07-26 03:45:26.503154] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.850 [2024-07-26 03:45:26.503170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.850 [2024-07-26 03:45:26.503185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.850 03:45:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:11.850 03:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:12.109 [2024-07-26 03:45:26.900190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:17:12.109 [2024-07-26 03:45:26.902894] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:12.109 [2024-07-26 03:45:26.902978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.109 [2024-07-26 03:45:26.903000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.109 [2024-07-26 03:45:26.903029] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:12.109 [2024-07-26 03:45:26.903046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.109 [2024-07-26 03:45:26.903063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.109 [2024-07-26 03:45:26.903094] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:12.109 [2024-07-26 03:45:26.903110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.109 [2024-07-26 03:45:26.903136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.109 [2024-07-26 03:45:26.903158] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:12.109 [2024-07-26 03:45:26.903173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.109 [2024-07-26 03:45:26.903205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:12.368 03:45:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.368 03:45:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:12.368 03:45:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:12.368 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:12.627 03:45:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.05 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.05 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:17:24.838 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:17:24.838 03:45:39 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:24.838 03:45:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:24.838 03:45:39 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:31.398 03:45:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.398 03:45:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:31.398 03:45:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.398 [2024-07-26 03:45:45.583251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:17:31.398 [2024-07-26 03:45:45.585305] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.585379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.585404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 [2024-07-26 03:45:45.585430] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.585448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.585463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 [2024-07-26 03:45:45.585499] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.585514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.585534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 [2024-07-26 03:45:45.585550] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.585566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.585581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:31.398 03:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:31.398 [2024-07-26 03:45:45.983278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:17:31.398 [2024-07-26 03:45:45.985086] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.985167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.985188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 [2024-07-26 03:45:45.985218] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.985233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.985249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 [2024-07-26 03:45:45.985281] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.985297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.985311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 [2024-07-26 03:45:45.985327] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.398 [2024-07-26 03:45:45.985341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.398 [2024-07-26 03:45:45.985357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:31.398 03:45:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.398 03:45:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:31.398 03:45:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:31.398 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:31.657 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.657 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.657 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.657 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:31.657 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:31.657 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.657 03:45:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.859 03:45:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.859 03:45:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.859 03:45:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.859 [2024-07-26 03:45:58.483437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:17:43.859 [2024-07-26 03:45:58.485687] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.859 [2024-07-26 03:45:58.485767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.859 [2024-07-26 03:45:58.485794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.859 [2024-07-26 03:45:58.485823] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.859 [2024-07-26 03:45:58.485861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.859 [2024-07-26 03:45:58.485878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.859 [2024-07-26 03:45:58.485900] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.859 [2024-07-26 03:45:58.485916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.859 [2024-07-26 03:45:58.485932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.859 [2024-07-26 03:45:58.485948] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.859 [2024-07-26 03:45:58.485965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.859 [2024-07-26 03:45:58.485980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.859 03:45:58 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.859 03:45:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.859 03:45:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.859 03:45:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:43.859 03:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:44.118 [2024-07-26 03:45:58.983457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:17:44.118 [2024-07-26 03:45:58.985364] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.118 [2024-07-26 03:45:58.985442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.118 [2024-07-26 03:45:58.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.118 [2024-07-26 03:45:58.985492] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.118 [2024-07-26 03:45:58.985507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.118 [2024-07-26 03:45:58.985526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.118 [2024-07-26 03:45:58.985541] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.118 [2024-07-26 03:45:58.985556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.118 [2024-07-26 03:45:58.985570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.118 [2024-07-26 03:45:58.985586] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.118 [2024-07-26 03:45:58.985599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.118 [2024-07-26 03:45:58.985614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:17:44.377 03:45:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.377 03:45:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:44.377 03:45:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.377 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.635 03:45:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.838 03:46:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.838 03:46:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.838 03:46:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.838 [2024-07-26 03:46:11.483641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:17:56.838 [2024-07-26 03:46:11.486178] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.838 [2024-07-26 03:46:11.486383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.838 [2024-07-26 03:46:11.486578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.838 [2024-07-26 03:46:11.486843] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.838 [2024-07-26 03:46:11.486987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.838 [2024-07-26 03:46:11.487230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.838 [2024-07-26 03:46:11.487268] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.838 [2024-07-26 03:46:11.487286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.838 [2024-07-26 03:46:11.487312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.838 [2024-07-26 03:46:11.487329] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.838 [2024-07-26 03:46:11.487347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.838 [2024-07-26 03:46:11.487363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.838 03:46:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.838 03:46:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.838 03:46:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:56.838 03:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:57.148 [2024-07-26 03:46:11.883659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:17:57.148 [2024-07-26 03:46:11.885658] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.148 [2024-07-26 03:46:11.885724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.148 [2024-07-26 03:46:11.885745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.148 [2024-07-26 03:46:11.885772] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.148 [2024-07-26 03:46:11.885787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.148 [2024-07-26 03:46:11.885802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.148 [2024-07-26 03:46:11.885818] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.148 [2024-07-26 03:46:11.885866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.148 [2024-07-26 03:46:11.885883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.148 [2024-07-26 03:46:11.885901] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.148 [2024-07-26 03:46:11.885915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.148 [2024-07-26 03:46:11.885933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.406 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:57.406 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:57.406 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:57.406 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:57.407 03:46:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.407 03:46:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:57.407 03:46:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:57.407 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:57.664 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.664 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.664 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.664 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:57.664 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:57.664 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.664 03:46:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@715 -- # time=44.99 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@716 -- # echo 44.99 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.99 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.99 2 00:18:09.861 remove_attach_helper took 44.99s to complete (handling 2 nvme drive(s)) 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:09.861 03:46:24 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74425 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74425 ']' 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74425 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74425 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.861 killing process with pid 74425 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74425' 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74425 00:18:09.861 03:46:24 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74425 00:18:11.764 03:46:26 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:12.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:12.588 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:12.588 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:12.847 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:12.847 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:12.847 ************************************ 00:18:12.847 END TEST sw_hotplug 00:18:12.847 ************************************ 00:18:12.847 
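The sw_hotplug loop traced above drives each hotplug event the same way: it surprise-removes both NVMe devices, polls rpc_cmd bdev_get_bdevs until SPDK no longer reports any NVMe PCI addresses, re-binds the devices to uio_pci_generic, and waits for both 0000:00:10.0 and 0000:00:11.0 to reappear. A minimal sketch of the polling helper, reconstructed from the xtrace lines (bdev_bdfs, the jq filter, and the 0.5 s retry are taken directly from the log; rpc_cmd is the autotest RPC wrapper):

  # List the PCI addresses (BDFs) of every NVMe-backed bdev SPDK still exposes.
  bdev_bdfs() {
      rpc_cmd bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }

  # After the removal writes, poll until no NVMe bdev is left.
  bdfs=($(bdev_bdfs))
  while ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
      bdfs=($(bdev_bdfs))
  done

The later sleep 12 and the [[ 0000:00:10.0 0000:00:11.0 == ... ]] comparison are the re-attach half of the same event: the script echoes uio_pci_generic and the BDFs back into sysfs, then checks that bdev_bdfs reports both controllers again before decrementing hotplug_events.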
00:18:12.847 real 2m31.158s 00:18:12.847 user 1m51.497s 00:18:12.847 sys 0m19.414s 00:18:12.847 03:46:27 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:12.847 03:46:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:12.847 03:46:27 -- common/autotest_common.sh@1142 -- # return 0 00:18:12.847 03:46:27 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:18:12.847 03:46:27 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:12.847 03:46:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:12.847 03:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.847 03:46:27 -- common/autotest_common.sh@10 -- # set +x 00:18:12.847 ************************************ 00:18:12.847 START TEST nvme_xnvme 00:18:12.847 ************************************ 00:18:12.847 03:46:27 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:13.106 * Looking for test storage... 00:18:13.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:13.106 03:46:27 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.106 03:46:27 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.106 03:46:27 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.106 03:46:27 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.106 03:46:27 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.106 03:46:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.106 03:46:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.106 03:46:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:13.106 03:46:27 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.106 03:46:27 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:18:13.106 
03:46:27 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:13.106 03:46:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.106 03:46:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 ************************************ 00:18:13.106 START TEST xnvme_to_malloc_dd_copy 00:18:13.106 ************************************ 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:13.106 03:46:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 { 00:18:13.106 "subsystems": [ 00:18:13.106 { 00:18:13.106 "subsystem": "bdev", 00:18:13.106 "config": [ 00:18:13.106 { 00:18:13.106 "params": { 00:18:13.106 "block_size": 512, 00:18:13.106 "num_blocks": 2097152, 00:18:13.106 "name": "malloc0" 00:18:13.106 }, 00:18:13.106 "method": 
"bdev_malloc_create" 00:18:13.106 }, 00:18:13.106 { 00:18:13.106 "params": { 00:18:13.106 "io_mechanism": "libaio", 00:18:13.106 "filename": "/dev/nullb0", 00:18:13.106 "name": "null0" 00:18:13.106 }, 00:18:13.106 "method": "bdev_xnvme_create" 00:18:13.106 }, 00:18:13.106 { 00:18:13.106 "method": "bdev_wait_for_examine" 00:18:13.106 } 00:18:13.106 ] 00:18:13.106 } 00:18:13.106 ] 00:18:13.106 } 00:18:13.106 [2024-07-26 03:46:27.897474] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:18:13.106 [2024-07-26 03:46:27.898131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75784 ] 00:18:13.364 [2024-07-26 03:46:28.073618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.623 [2024-07-26 03:46:28.301303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.854  Copying: 173/1024 [MB] (173 MBps) Copying: 348/1024 [MB] (174 MBps) Copying: 525/1024 [MB] (177 MBps) Copying: 699/1024 [MB] (173 MBps) Copying: 874/1024 [MB] (174 MBps) Copying: 1024/1024 [MB] (average 175 MBps) 00:18:23.854 00:18:24.113 03:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:18:24.113 03:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:18:24.113 03:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:24.113 03:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:24.113 { 00:18:24.113 "subsystems": [ 00:18:24.113 { 00:18:24.113 "subsystem": "bdev", 00:18:24.113 "config": [ 00:18:24.113 { 00:18:24.113 "params": { 00:18:24.113 "block_size": 512, 00:18:24.113 "num_blocks": 2097152, 00:18:24.113 "name": "malloc0" 00:18:24.113 }, 00:18:24.113 "method": "bdev_malloc_create" 00:18:24.113 }, 00:18:24.113 { 00:18:24.113 "params": { 00:18:24.113 "io_mechanism": "libaio", 00:18:24.113 "filename": "/dev/nullb0", 00:18:24.113 "name": "null0" 00:18:24.113 }, 00:18:24.113 "method": "bdev_xnvme_create" 00:18:24.113 }, 00:18:24.113 { 00:18:24.113 "method": "bdev_wait_for_examine" 00:18:24.113 } 00:18:24.113 ] 00:18:24.113 } 00:18:24.113 ] 00:18:24.113 } 00:18:24.113 [2024-07-26 03:46:38.878513] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:18:24.113 [2024-07-26 03:46:38.878684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75905 ] 00:18:24.372 [2024-07-26 03:46:39.052028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.372 [2024-07-26 03:46:39.226369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.957  Copying: 176/1024 [MB] (176 MBps) Copying: 352/1024 [MB] (176 MBps) Copying: 522/1024 [MB] (169 MBps) Copying: 699/1024 [MB] (177 MBps) Copying: 873/1024 [MB] (173 MBps) Copying: 1024/1024 [MB] (average 174 MBps) 00:18:34.957 00:18:34.957 03:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:18:34.957 03:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:18:34.957 03:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:18:34.957 03:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:18:34.957 03:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:34.957 03:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:34.957 { 00:18:34.957 "subsystems": [ 00:18:34.957 { 00:18:34.957 "subsystem": "bdev", 00:18:34.957 "config": [ 00:18:34.957 { 00:18:34.957 "params": { 00:18:34.957 "block_size": 512, 00:18:34.957 "num_blocks": 2097152, 00:18:34.957 "name": "malloc0" 00:18:34.957 }, 00:18:34.957 "method": "bdev_malloc_create" 00:18:34.957 }, 00:18:34.957 { 00:18:34.957 "params": { 00:18:34.957 "io_mechanism": "io_uring", 00:18:34.957 "filename": "/dev/nullb0", 00:18:34.957 "name": "null0" 00:18:34.957 }, 00:18:34.957 "method": "bdev_xnvme_create" 00:18:34.957 }, 00:18:34.957 { 00:18:34.957 "method": "bdev_wait_for_examine" 00:18:34.957 } 00:18:34.957 ] 00:18:34.957 } 00:18:34.957 ] 00:18:34.957 } 00:18:34.957 [2024-07-26 03:46:49.856105] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:18:34.957 [2024-07-26 03:46:49.856296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76030 ] 00:18:35.246 [2024-07-26 03:46:50.034186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.504 [2024-07-26 03:46:50.243077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.350  Copying: 191/1024 [MB] (191 MBps) Copying: 380/1024 [MB] (188 MBps) Copying: 562/1024 [MB] (182 MBps) Copying: 738/1024 [MB] (176 MBps) Copying: 920/1024 [MB] (182 MBps) Copying: 1024/1024 [MB] (average 183 MBps) 00:18:46.350 00:18:46.350 03:47:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:18:46.350 03:47:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:18:46.350 03:47:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:46.350 03:47:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:46.350 { 00:18:46.350 "subsystems": [ 00:18:46.350 { 00:18:46.350 "subsystem": "bdev", 00:18:46.350 "config": [ 00:18:46.350 { 00:18:46.350 "params": { 00:18:46.350 "block_size": 512, 00:18:46.350 "num_blocks": 2097152, 00:18:46.350 "name": "malloc0" 00:18:46.350 }, 00:18:46.350 "method": "bdev_malloc_create" 00:18:46.350 }, 00:18:46.350 { 00:18:46.350 "params": { 00:18:46.350 "io_mechanism": "io_uring", 00:18:46.350 "filename": "/dev/nullb0", 00:18:46.350 "name": "null0" 00:18:46.350 }, 00:18:46.350 "method": "bdev_xnvme_create" 00:18:46.350 }, 00:18:46.350 { 00:18:46.350 "method": "bdev_wait_for_examine" 00:18:46.350 } 00:18:46.350 ] 00:18:46.350 } 00:18:46.350 ] 00:18:46.350 } 00:18:46.350 [2024-07-26 03:47:00.584353] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:18:46.350 [2024-07-26 03:47:00.584515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76146 ] 00:18:46.350 [2024-07-26 03:47:00.759861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.350 [2024-07-26 03:47:00.987203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.213  Copying: 185/1024 [MB] (185 MBps) Copying: 376/1024 [MB] (190 MBps) Copying: 568/1024 [MB] (192 MBps) Copying: 760/1024 [MB] (192 MBps) Copying: 948/1024 [MB] (188 MBps) Copying: 1024/1024 [MB] (average 189 MBps) 00:18:56.213 00:18:56.472 03:47:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:18:56.472 03:47:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:18:56.472 00:18:56.472 real 0m43.391s 00:18:56.472 ************************************ 00:18:56.472 END TEST xnvme_to_malloc_dd_copy 00:18:56.472 ************************************ 00:18:56.472 user 0m38.131s 00:18:56.472 sys 0m4.657s 00:18:56.472 03:47:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.472 03:47:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:56.472 03:47:11 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:18:56.472 03:47:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:56.472 03:47:11 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:56.472 03:47:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.472 03:47:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:56.472 ************************************ 00:18:56.472 START TEST xnvme_bdevperf 00:18:56.472 ************************************ 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:56.472 03:47:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.472 { 00:18:56.472 "subsystems": [ 00:18:56.472 { 00:18:56.472 "subsystem": "bdev", 00:18:56.472 "config": [ 00:18:56.472 { 00:18:56.472 "params": { 00:18:56.472 "io_mechanism": "libaio", 00:18:56.472 "filename": "/dev/nullb0", 00:18:56.472 "name": "null0" 00:18:56.472 }, 00:18:56.472 "method": "bdev_xnvme_create" 00:18:56.472 }, 00:18:56.472 { 00:18:56.472 "method": "bdev_wait_for_examine" 00:18:56.473 } 00:18:56.473 ] 00:18:56.473 } 00:18:56.473 ] 00:18:56.473 } 00:18:56.473 [2024-07-26 03:47:11.346580] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:18:56.473 [2024-07-26 03:47:11.346763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76294 ] 00:18:56.732 [2024-07-26 03:47:11.520229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.991 [2024-07-26 03:47:11.704673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.250 Running I/O for 5 seconds... 00:19:02.526 00:19:02.526 Latency(us) 00:19:02.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.526 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:02.526 null0 : 5.00 105857.21 413.50 0.00 0.00 600.96 198.28 1414.98 00:19:02.526 =================================================================================================================== 00:19:02.526 Total : 105857.21 413.50 0.00 0.00 600.96 198.28 1414.98 00:19:03.462 03:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:19:03.462 03:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:19:03.462 03:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:19:03.462 03:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:19:03.462 03:47:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:03.462 03:47:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:03.462 { 00:19:03.462 "subsystems": [ 00:19:03.462 { 00:19:03.462 "subsystem": "bdev", 00:19:03.462 "config": [ 00:19:03.462 { 00:19:03.462 "params": { 00:19:03.462 "io_mechanism": "io_uring", 00:19:03.462 "filename": "/dev/nullb0", 00:19:03.462 "name": "null0" 00:19:03.462 }, 00:19:03.462 "method": "bdev_xnvme_create" 00:19:03.462 }, 00:19:03.462 { 00:19:03.462 "method": "bdev_wait_for_examine" 00:19:03.462 } 00:19:03.462 ] 00:19:03.462 } 00:19:03.462 ] 00:19:03.462 } 00:19:03.462 [2024-07-26 03:47:18.267396] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:19:03.462 [2024-07-26 03:47:18.267555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76372 ] 00:19:03.721 [2024-07-26 03:47:18.434561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.980 [2024-07-26 03:47:18.664363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.239 Running I/O for 5 seconds... 00:19:09.515 00:19:09.515 Latency(us) 00:19:09.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.515 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:09.515 null0 : 5.00 144095.00 562.87 0.00 0.00 440.65 258.79 2636.33 00:19:09.515 =================================================================================================================== 00:19:09.515 Total : 144095.00 562.87 0.00 0.00 440.65 258.79 2636.33 00:19:10.449 03:47:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:19:10.449 03:47:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:19:10.449 ************************************ 00:19:10.449 END TEST xnvme_bdevperf 00:19:10.449 ************************************ 00:19:10.449 00:19:10.449 real 0m13.887s 00:19:10.449 user 0m10.714s 00:19:10.449 sys 0m2.940s 00:19:10.449 03:47:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:10.449 03:47:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.449 03:47:25 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:19:10.449 ************************************ 00:19:10.449 END TEST nvme_xnvme 00:19:10.449 ************************************ 00:19:10.449 00:19:10.449 real 0m57.459s 00:19:10.449 user 0m48.920s 00:19:10.449 sys 0m7.696s 00:19:10.449 03:47:25 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:10.449 03:47:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.449 03:47:25 -- common/autotest_common.sh@1142 -- # return 0 00:19:10.449 03:47:25 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:10.449 03:47:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:10.449 03:47:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.449 03:47:25 -- common/autotest_common.sh@10 -- # set +x 00:19:10.449 ************************************ 00:19:10.449 START TEST blockdev_xnvme 00:19:10.449 ************************************ 00:19:10.449 03:47:25 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:10.450 * Looking for test storage... 
00:19:10.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76512 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:10.450 03:47:25 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76512 00:19:10.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.450 03:47:25 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76512 ']' 00:19:10.450 03:47:25 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.450 03:47:25 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.450 03:47:25 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.450 03:47:25 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.450 03:47:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.708 [2024-07-26 03:47:25.387977] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
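blockdev.sh runs against a long-lived spdk_tgt rather than per-invocation apps: it launches the target, waits for its RPC socket, and only then issues the bdev_xnvme_create calls. A minimal sketch of the bring-up visible in the trace (waitforlisten and killprocess are helpers from autotest_common.sh whose internals are not shown in the log; backgrounding with & and capturing $! is assumed, since only the command, the pid, and the trap appear in the xtrace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &    # env_ctx and wait_for_rpc are empty here
  spdk_tgt_pid=$!
  trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$spdk_tgt_pid"    # returns once /var/tmp/spdk.sock accepts RPCs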
00:19:10.708 [2024-07-26 03:47:25.388386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76512 ] 00:19:10.708 [2024-07-26 03:47:25.564216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.966 [2024-07-26 03:47:25.749476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.900 03:47:26 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.900 03:47:26 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:19:11.900 03:47:26 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:11.900 03:47:26 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:19:11.900 03:47:26 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:11.900 03:47:26 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:11.900 03:47:26 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:11.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:12.158 Waiting for block devices as requested 00:19:12.158 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.416 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.416 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.416 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:17.687 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:17.687 03:47:32 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.687 03:47:32 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:19:17.687 nvme0n1 00:19:17.687 nvme1n1 00:19:17.687 nvme2n1 00:19:17.687 nvme2n2 00:19:17.687 nvme2n3 00:19:17.687 nvme3n1 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.687 
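setup_xnvme_conf, traced above, walks every /dev/nvme*n* node, skips zoned namespaces, and queues one bdev_xnvme_create line per device, which it then feeds to rpc_cmd as a single batch; the six echoed names (nvme0n1 through nvme3n1) are the resulting xNVMe bdevs. A condensed sketch of that loop (the real script first collects zoned devices via is_block_zoned; the inline /sys/block check below collapses that step, and the exact stdin redirection into rpc_cmd is not visible in the xtrace):

  io_mechanism=io_uring
  nvmes=()
  for nvme in /dev/nvme*n*; do
      [[ -b $nvme ]] || continue
      # Zoned namespaces are skipped; /sys/block/<name>/queue/zoned reads "none" for regular devices.
      zoned_attr=/sys/block/${nvme##*/}/queue/zoned
      [[ -e $zoned_attr && $(<"$zoned_attr") != none ]] && continue
      nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
  done
  if ((${#nvmes[@]} > 0)); then
      printf '%s\n' "${nvmes[@]}" | rpc_cmd    # one RPC session creating all six bdevs
  fi

The bdev_get_bdevs dump that follows confirms each of the six is reported as an "xNVMe bdev" with its expected block count.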
03:47:32 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.687 03:47:32 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.687 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:17.688 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dba3d3c0-9d30-430a-9861-2bdc6db15db7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dba3d3c0-9d30-430a-9861-2bdc6db15db7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e467e0f9-1356-43e2-977e-9fc121adc82b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e467e0f9-1356-43e2-977e-9fc121adc82b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2e0ca2a8-084b-420e-b839-6f5c82bbfa4f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2e0ca2a8-084b-420e-b839-6f5c82bbfa4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "cd6befb2-7d4c-4d0a-8920-9c82fdfa5b91"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd6befb2-7d4c-4d0a-8920-9c82fdfa5b91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "f1c9b7dc-827a-4af6-96fc-a849caa9e5d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1c9b7dc-827a-4af6-96fc-a849caa9e5d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "4c507246-4c8a-480d-8c9b-5421d961df8f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4c507246-4c8a-480d-8c9b-5421d961df8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:17.688 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:17.947 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:17.947 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:19:17.947 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:17.947 03:47:32 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 76512 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76512 ']' 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76512 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76512 00:19:17.947 killing process with pid 76512 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 76512' 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76512 00:19:17.947 03:47:32 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76512 00:19:20.479 03:47:34 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:20.479 03:47:34 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:20.479 03:47:34 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:20.479 03:47:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.479 03:47:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.479 ************************************ 00:19:20.479 START TEST bdev_hello_world 00:19:20.479 ************************************ 00:19:20.479 03:47:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:20.479 [2024-07-26 03:47:34.880776] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:19:20.479 [2024-07-26 03:47:34.880957] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76882 ] 00:19:20.479 [2024-07-26 03:47:35.044049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.479 [2024-07-26 03:47:35.233533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.738 [2024-07-26 03:47:35.623282] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:20.738 [2024-07-26 03:47:35.623334] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:20.738 [2024-07-26 03:47:35.623359] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:20.738 [2024-07-26 03:47:35.625692] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:20.738 [2024-07-26 03:47:35.625968] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:20.738 [2024-07-26 03:47:35.626000] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:20.738 [2024-07-26 03:47:35.626136] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
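The trace above registers every /dev/nvme*n* node as an xNVMe bdev and then lets hello_bdev write and read back "Hello World!". A minimal stand-alone sketch of the same registration step, assuming an SPDK application is already listening on the default /var/tmp/spdk.sock socket; the paths and the io_uring mechanism are taken from the printf output above, everything else is illustrative:

#!/usr/bin/env bash
# Sketch only: register the same namespaces as xNVMe bdevs by hand,
# assuming an SPDK app is already listening on the default RPC socket.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as printed in the trace above
io_mechanism=io_uring

# Same loop shape as blockdev.sh@94-96: one bdev per /dev/nvme*n* block node.
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    # bdev_xnvme_create <filename> <name> <io_mechanism>
    "$rpc" bdev_xnvme_create "$nvme" "${nvme##*/}" "$io_mechanism"
done

# Wait for examine and list the unclaimed bdev names, as blockdev.sh@747-748 does.
"$rpc" bdev_wait_for_examine
"$rpc" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'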
00:19:20.738 00:19:20.738 [2024-07-26 03:47:35.626169] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:22.115 00:19:22.115 real 0m1.961s 00:19:22.115 ************************************ 00:19:22.115 END TEST bdev_hello_world 00:19:22.115 ************************************ 00:19:22.115 user 0m1.647s 00:19:22.115 sys 0m0.198s 00:19:22.115 03:47:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.115 03:47:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:22.115 03:47:36 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:19:22.115 03:47:36 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:22.115 03:47:36 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:22.115 03:47:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.115 03:47:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.115 ************************************ 00:19:22.115 START TEST bdev_bounds 00:19:22.115 ************************************ 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=76924 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:22.115 Process bdevio pid: 76924 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 76924' 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 76924 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 76924 ']' 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.115 03:47:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:22.115 [2024-07-26 03:47:36.903717] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
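The bdev_bounds case that starts here launches bdevio against the shared bdev.json and then drives the CUnit suites through tests.py. A rough sketch of that flow run by hand; the flags are copied from the command line above, while the plain socket poll standing in for the harness's waitforlisten helper is an assumption:

#!/usr/bin/env bash
# Sketch only: rerun the bdevio bounds pass by hand with the same JSON config.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
conf=$spdk/test/bdev/bdev.json

"$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$conf" &   # flags as printed in the trace
bdevio_pid=$!

# Poll for the default RPC socket instead of the harness's waitforlisten helper.
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.2; done

# Drive the CUnit suites shown in the output below.
"$spdk/test/bdev/bdevio/tests.py" perform_tests

kill "$bdevio_pid"
wait "$bdevio_pid" || true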
00:19:22.115 [2024-07-26 03:47:36.904147] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76924 ] 00:19:22.374 [2024-07-26 03:47:37.076473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.374 [2024-07-26 03:47:37.269091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.374 [2024-07-26 03:47:37.269162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.374 [2024-07-26 03:47:37.269162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.309 03:47:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.309 03:47:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:19:23.309 03:47:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:23.309 I/O targets: 00:19:23.309 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:23.309 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:23.309 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:23.309 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:23.309 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:23.309 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:23.309 00:19:23.309 00:19:23.309 CUnit - A unit testing framework for C - Version 2.1-3 00:19:23.309 http://cunit.sourceforge.net/ 00:19:23.309 00:19:23.309 00:19:23.309 Suite: bdevio tests on: nvme3n1 00:19:23.309 Test: blockdev write read block ...passed 00:19:23.309 Test: blockdev write zeroes read block ...passed 00:19:23.309 Test: blockdev write zeroes read no split ...passed 00:19:23.309 Test: blockdev write zeroes read split ...passed 00:19:23.309 Test: blockdev write zeroes read split partial ...passed 00:19:23.309 Test: blockdev reset ...passed 00:19:23.309 Test: blockdev write read 8 blocks ...passed 00:19:23.309 Test: blockdev write read size > 128k ...passed 00:19:23.309 Test: blockdev write read invalid size ...passed 00:19:23.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.309 Test: blockdev write read max offset ...passed 00:19:23.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.309 Test: blockdev writev readv 8 blocks ...passed 00:19:23.309 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.309 Test: blockdev writev readv block ...passed 00:19:23.309 Test: blockdev writev readv size > 128k ...passed 00:19:23.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.309 Test: blockdev comparev and writev ...passed 00:19:23.309 Test: blockdev nvme passthru rw ...passed 00:19:23.309 Test: blockdev nvme passthru vendor specific ...passed 00:19:23.309 Test: blockdev nvme admin passthru ...passed 00:19:23.309 Test: blockdev copy ...passed 00:19:23.309 Suite: bdevio tests on: nvme2n3 00:19:23.309 Test: blockdev write read block ...passed 00:19:23.309 Test: blockdev write zeroes read block ...passed 00:19:23.309 Test: blockdev write zeroes read no split ...passed 00:19:23.309 Test: blockdev write zeroes read split ...passed 00:19:23.309 Test: blockdev write zeroes read split partial ...passed 00:19:23.309 Test: blockdev reset ...passed 
00:19:23.309 Test: blockdev write read 8 blocks ...passed 00:19:23.309 Test: blockdev write read size > 128k ...passed 00:19:23.309 Test: blockdev write read invalid size ...passed 00:19:23.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.309 Test: blockdev write read max offset ...passed 00:19:23.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.309 Test: blockdev writev readv 8 blocks ...passed 00:19:23.309 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.309 Test: blockdev writev readv block ...passed 00:19:23.309 Test: blockdev writev readv size > 128k ...passed 00:19:23.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.309 Test: blockdev comparev and writev ...passed 00:19:23.309 Test: blockdev nvme passthru rw ...passed 00:19:23.309 Test: blockdev nvme passthru vendor specific ...passed 00:19:23.309 Test: blockdev nvme admin passthru ...passed 00:19:23.309 Test: blockdev copy ...passed 00:19:23.309 Suite: bdevio tests on: nvme2n2 00:19:23.309 Test: blockdev write read block ...passed 00:19:23.309 Test: blockdev write zeroes read block ...passed 00:19:23.309 Test: blockdev write zeroes read no split ...passed 00:19:23.568 Test: blockdev write zeroes read split ...passed 00:19:23.568 Test: blockdev write zeroes read split partial ...passed 00:19:23.568 Test: blockdev reset ...passed 00:19:23.568 Test: blockdev write read 8 blocks ...passed 00:19:23.568 Test: blockdev write read size > 128k ...passed 00:19:23.568 Test: blockdev write read invalid size ...passed 00:19:23.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.568 Test: blockdev write read max offset ...passed 00:19:23.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.568 Test: blockdev writev readv 8 blocks ...passed 00:19:23.568 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.568 Test: blockdev writev readv block ...passed 00:19:23.568 Test: blockdev writev readv size > 128k ...passed 00:19:23.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.568 Test: blockdev comparev and writev ...passed 00:19:23.568 Test: blockdev nvme passthru rw ...passed 00:19:23.568 Test: blockdev nvme passthru vendor specific ...passed 00:19:23.568 Test: blockdev nvme admin passthru ...passed 00:19:23.568 Test: blockdev copy ...passed 00:19:23.568 Suite: bdevio tests on: nvme2n1 00:19:23.568 Test: blockdev write read block ...passed 00:19:23.568 Test: blockdev write zeroes read block ...passed 00:19:23.568 Test: blockdev write zeroes read no split ...passed 00:19:23.568 Test: blockdev write zeroes read split ...passed 00:19:23.568 Test: blockdev write zeroes read split partial ...passed 00:19:23.568 Test: blockdev reset ...passed 00:19:23.568 Test: blockdev write read 8 blocks ...passed 00:19:23.568 Test: blockdev write read size > 128k ...passed 00:19:23.568 Test: blockdev write read invalid size ...passed 00:19:23.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.568 Test: blockdev write read max offset ...passed 00:19:23.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.568 Test: blockdev writev readv 8 blocks 
...passed 00:19:23.568 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.568 Test: blockdev writev readv block ...passed 00:19:23.568 Test: blockdev writev readv size > 128k ...passed 00:19:23.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.568 Test: blockdev comparev and writev ...passed 00:19:23.568 Test: blockdev nvme passthru rw ...passed 00:19:23.568 Test: blockdev nvme passthru vendor specific ...passed 00:19:23.568 Test: blockdev nvme admin passthru ...passed 00:19:23.568 Test: blockdev copy ...passed 00:19:23.568 Suite: bdevio tests on: nvme1n1 00:19:23.568 Test: blockdev write read block ...passed 00:19:23.568 Test: blockdev write zeroes read block ...passed 00:19:23.568 Test: blockdev write zeroes read no split ...passed 00:19:23.568 Test: blockdev write zeroes read split ...passed 00:19:23.568 Test: blockdev write zeroes read split partial ...passed 00:19:23.568 Test: blockdev reset ...passed 00:19:23.568 Test: blockdev write read 8 blocks ...passed 00:19:23.568 Test: blockdev write read size > 128k ...passed 00:19:23.568 Test: blockdev write read invalid size ...passed 00:19:23.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.568 Test: blockdev write read max offset ...passed 00:19:23.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.568 Test: blockdev writev readv 8 blocks ...passed 00:19:23.568 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.568 Test: blockdev writev readv block ...passed 00:19:23.568 Test: blockdev writev readv size > 128k ...passed 00:19:23.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.568 Test: blockdev comparev and writev ...passed 00:19:23.568 Test: blockdev nvme passthru rw ...passed 00:19:23.568 Test: blockdev nvme passthru vendor specific ...passed 00:19:23.568 Test: blockdev nvme admin passthru ...passed 00:19:23.568 Test: blockdev copy ...passed 00:19:23.568 Suite: bdevio tests on: nvme0n1 00:19:23.568 Test: blockdev write read block ...passed 00:19:23.568 Test: blockdev write zeroes read block ...passed 00:19:23.568 Test: blockdev write zeroes read no split ...passed 00:19:23.827 Test: blockdev write zeroes read split ...passed 00:19:23.827 Test: blockdev write zeroes read split partial ...passed 00:19:23.827 Test: blockdev reset ...passed 00:19:23.827 Test: blockdev write read 8 blocks ...passed 00:19:23.827 Test: blockdev write read size > 128k ...passed 00:19:23.827 Test: blockdev write read invalid size ...passed 00:19:23.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.827 Test: blockdev write read max offset ...passed 00:19:23.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.827 Test: blockdev writev readv 8 blocks ...passed 00:19:23.827 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.827 Test: blockdev writev readv block ...passed 00:19:23.827 Test: blockdev writev readv size > 128k ...passed 00:19:23.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.827 Test: blockdev comparev and writev ...passed 00:19:23.827 Test: blockdev nvme passthru rw ...passed 00:19:23.827 Test: blockdev nvme passthru vendor specific ...passed 00:19:23.827 Test: blockdev nvme admin passthru ...passed 00:19:23.827 Test: blockdev copy ...passed 
00:19:23.827 00:19:23.827 Run Summary: Type Total Ran Passed Failed Inactive 00:19:23.827 suites 6 6 n/a 0 0 00:19:23.827 tests 138 138 138 0 0 00:19:23.827 asserts 780 780 780 0 n/a 00:19:23.827 00:19:23.827 Elapsed time = 1.292 seconds 00:19:23.827 0 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 76924 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 76924 ']' 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 76924 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76924 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76924' 00:19:23.827 killing process with pid 76924 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 76924 00:19:23.827 03:47:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 76924 00:19:25.226 03:47:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:25.226 00:19:25.226 real 0m2.934s 00:19:25.226 user 0m7.080s 00:19:25.226 sys 0m0.377s 00:19:25.226 03:47:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.226 03:47:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:25.226 ************************************ 00:19:25.226 END TEST bdev_bounds 00:19:25.226 ************************************ 00:19:25.226 03:47:39 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:19:25.226 03:47:39 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:19:25.226 03:47:39 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:25.226 03:47:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.226 03:47:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.226 ************************************ 00:19:25.226 START TEST bdev_nbd 00:19:25.226 ************************************ 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 
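nbd_function_test, whose prologue appears above, pairs each of the six bdevs with a /dev/nbdX node, reads one 4 KiB block through it, and tears the mapping down again. A condensed sketch of a single such cycle, assuming bdev_svc is already up on /var/tmp/spdk-nbd.sock with the same bdevs registered and the nbd kernel module loaded; the scratch file path is arbitrary:

#!/usr/bin/env bash
# Sketch only: one export/verify/teardown cycle of the nbd flow traced below.
set -euo pipefail

rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)

# Map one bdev onto /dev/nbd0, as nbd_start_disks does for each device.
"${rpc[@]}" nbd_start_disk nvme0n1 /dev/nbd0

# Same smoke test the harness uses: wait for the node, read one 4 KiB block.
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
stat -c %s /tmp/nbdtest                                  # expect 4096

# Tear down and confirm nothing is left exported.
"${rpc[@]}" nbd_stop_disk /dev/nbd0
"${rpc[@]}" nbd_get_disks | jq -r '.[] | .nbd_device'    # expect empty output
rm -f /tmp/nbdtest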
00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:25.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=76984 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 76984 /var/tmp/spdk-nbd.sock 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76984 ']' 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.226 03:47:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:25.226 [2024-07-26 03:47:39.917479] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:19:25.226 [2024-07-26 03:47:39.917630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.226 [2024-07-26 03:47:40.079236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.497 [2024-07-26 03:47:40.268307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:26.064 03:47:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:26.323 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.323 
1+0 records in 00:19:26.323 1+0 records out 00:19:26.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550028 s, 7.4 MB/s 00:19:26.324 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.324 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:26.324 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.581 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:26.581 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:26.581 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:26.581 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:26.581 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.840 1+0 records in 00:19:26.840 1+0 records out 00:19:26.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527826 s, 7.8 MB/s 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:26.840 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:27.098 03:47:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.098 1+0 records in 00:19:27.098 1+0 records out 00:19:27.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692903 s, 5.9 MB/s 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:27.098 03:47:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:19:27.356 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.357 1+0 records in 00:19:27.357 1+0 records out 00:19:27.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583569 s, 7.0 MB/s 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:27.357 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:19:27.615 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:27.615 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.616 1+0 records in 00:19:27.616 1+0 records out 00:19:27.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516179 s, 7.9 MB/s 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:27.616 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:19:27.874 03:47:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.874 1+0 records in 00:19:27.874 1+0 records out 00:19:27.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706175 s, 5.8 MB/s 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:27.874 03:47:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:28.132 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd0", 00:19:28.132 "bdev_name": "nvme0n1" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd1", 00:19:28.132 "bdev_name": "nvme1n1" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd2", 00:19:28.132 "bdev_name": "nvme2n1" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd3", 00:19:28.132 "bdev_name": "nvme2n2" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd4", 00:19:28.132 "bdev_name": "nvme2n3" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd5", 00:19:28.132 "bdev_name": "nvme3n1" 00:19:28.132 } 00:19:28.132 ]' 00:19:28.132 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:28.132 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd0", 00:19:28.132 "bdev_name": "nvme0n1" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd1", 00:19:28.132 "bdev_name": "nvme1n1" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd2", 00:19:28.132 "bdev_name": "nvme2n1" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd3", 00:19:28.132 "bdev_name": "nvme2n2" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd4", 00:19:28.132 "bdev_name": "nvme2n3" 00:19:28.132 }, 00:19:28.132 { 00:19:28.132 "nbd_device": "/dev/nbd5", 00:19:28.132 "bdev_name": "nvme3n1" 00:19:28.132 } 00:19:28.132 ]' 00:19:28.132 03:47:43 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:28.390 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:28.390 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.390 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:28.390 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.390 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:28.390 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.390 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.654 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.912 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.170 03:47:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.429 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.687 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.945 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:30.203 03:47:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:30.461 /dev/nbd0 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.461 1+0 records in 00:19:30.461 1+0 records out 00:19:30.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067711 s, 6.0 MB/s 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:30.461 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:19:30.720 /dev/nbd1 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.720 1+0 records in 00:19:30.720 1+0 records out 00:19:30.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497833 s, 8.2 MB/s 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:30.720 03:47:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:30.720 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:19:30.979 /dev/nbd10 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.979 1+0 records in 00:19:30.979 1+0 records out 00:19:30.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546673 s, 7.5 MB/s 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:30.979 03:47:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:19:31.238 /dev/nbd11 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:31.238 03:47:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.238 1+0 records in 00:19:31.238 1+0 records out 00:19:31.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719823 s, 5.7 MB/s 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:31.238 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:19:31.497 /dev/nbd12 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.497 1+0 records in 00:19:31.497 1+0 records out 00:19:31.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499185 s, 8.2 MB/s 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:31.497 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:32.065 /dev/nbd13 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:32.065 1+0 records in 00:19:32.065 1+0 records out 00:19:32.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715326 s, 5.7 MB/s 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.065 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:32.324 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:32.324 { 00:19:32.324 "nbd_device": "/dev/nbd0", 00:19:32.324 "bdev_name": "nvme0n1" 00:19:32.324 }, 00:19:32.324 { 00:19:32.324 "nbd_device": "/dev/nbd1", 00:19:32.324 "bdev_name": "nvme1n1" 00:19:32.324 }, 00:19:32.324 { 00:19:32.324 "nbd_device": "/dev/nbd10", 00:19:32.324 "bdev_name": "nvme2n1" 00:19:32.324 }, 00:19:32.324 { 00:19:32.324 "nbd_device": "/dev/nbd11", 00:19:32.324 "bdev_name": "nvme2n2" 00:19:32.324 }, 00:19:32.324 { 00:19:32.324 "nbd_device": "/dev/nbd12", 00:19:32.325 "bdev_name": "nvme2n3" 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "nbd_device": "/dev/nbd13", 00:19:32.325 "bdev_name": "nvme3n1" 00:19:32.325 } 00:19:32.325 ]' 00:19:32.325 03:47:46 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:32.325 { 00:19:32.325 "nbd_device": "/dev/nbd0", 00:19:32.325 "bdev_name": "nvme0n1" 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "nbd_device": "/dev/nbd1", 00:19:32.325 "bdev_name": "nvme1n1" 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "nbd_device": "/dev/nbd10", 00:19:32.325 "bdev_name": "nvme2n1" 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "nbd_device": "/dev/nbd11", 00:19:32.325 "bdev_name": "nvme2n2" 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "nbd_device": "/dev/nbd12", 00:19:32.325 "bdev_name": "nvme2n3" 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "nbd_device": "/dev/nbd13", 00:19:32.325 "bdev_name": "nvme3n1" 00:19:32.325 } 00:19:32.325 ]' 00:19:32.325 03:47:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:32.325 /dev/nbd1 00:19:32.325 /dev/nbd10 00:19:32.325 /dev/nbd11 00:19:32.325 /dev/nbd12 00:19:32.325 /dev/nbd13' 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:32.325 /dev/nbd1 00:19:32.325 /dev/nbd10 00:19:32.325 /dev/nbd11 00:19:32.325 /dev/nbd12 00:19:32.325 /dev/nbd13' 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:32.325 256+0 records in 00:19:32.325 256+0 records out 00:19:32.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00703693 s, 149 MB/s 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:32.325 256+0 records in 00:19:32.325 256+0 records out 00:19:32.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131941 s, 7.9 MB/s 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:32.325 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:32.583 256+0 records in 00:19:32.583 256+0 records out 00:19:32.583 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.154083 s, 6.8 MB/s 00:19:32.583 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:32.583 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:32.843 256+0 records in 00:19:32.843 256+0 records out 00:19:32.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147819 s, 7.1 MB/s 00:19:32.843 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:32.843 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:32.843 256+0 records in 00:19:32.843 256+0 records out 00:19:32.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148068 s, 7.1 MB/s 00:19:32.843 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:32.843 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:33.101 256+0 records in 00:19:33.101 256+0 records out 00:19:33.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140458 s, 7.5 MB/s 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:33.101 256+0 records in 00:19:33.101 256+0 records out 00:19:33.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156951 s, 6.7 MB/s 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:33.101 03:47:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.359 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.617 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.875 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.133 03:47:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.391 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.649 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.908 03:47:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.908 03:47:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:19:35.166 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:35.733 malloc_lvol_verify 00:19:35.733 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:35.733 c4e5c29c-966d-4cb4-a91d-0efc4fb1aac2 00:19:35.733 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:35.990 5771692b-f913-41d7-aba2-1eeffdb50f9a 00:19:35.990 03:47:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:36.249 /dev/nbd0 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:19:36.249 mke2fs 1.46.5 (30-Dec-2021) 00:19:36.249 Discarding device blocks: 0/4096 done 
00:19:36.249 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:36.249 00:19:36.249 Allocating group tables: 0/1 done 00:19:36.249 Writing inode tables: 0/1 done 00:19:36.249 Creating journal (1024 blocks): done 00:19:36.249 Writing superblocks and filesystem accounting information: 0/1 done 00:19:36.249 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.249 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:36.507 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:36.507 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:36.507 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:36.507 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.507 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.507 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 76984 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76984 ']' 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76984 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76984 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:36.766 killing process with pid 76984 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76984' 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76984 00:19:36.766 03:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76984 00:19:38.139 03:47:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:38.139 00:19:38.139 real 0m12.823s 00:19:38.139 user 0m18.174s 00:19:38.139 sys 0m4.228s 00:19:38.139 03:47:52 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:19:38.139 03:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:38.139 ************************************ 00:19:38.139 END TEST bdev_nbd 00:19:38.139 ************************************ 00:19:38.139 03:47:52 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:19:38.139 03:47:52 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:38.139 03:47:52 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:19:38.139 03:47:52 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:19:38.139 03:47:52 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:38.139 03:47:52 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:38.139 03:47:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.139 03:47:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:38.139 ************************************ 00:19:38.139 START TEST bdev_fio 00:19:38.139 ************************************ 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:38.139 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:38.139 03:47:52 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:38.139 ************************************ 00:19:38.139 START TEST bdev_fio_rw_verify 00:19:38.139 ************************************ 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:38.139 
03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:38.139 03:47:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:38.139 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:38.139 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:38.139 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:38.139 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:38.140 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:38.140 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:38.140 fio-3.35 00:19:38.140 Starting 6 threads 00:19:50.363 00:19:50.363 
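For readers following the trace: the bdev_fio stage above first generates a per-bdev fio job file (/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio) via fio_config_gen plus the echoed [job_*]/filename= lines, and only then launches fio through the SPDK bdev plugin with the LD_PRELOAD shown. The shell sketch below reconstructs just the job-file generation that is visible in the trace; the contents of the [global] section are not echoed in the log, so that part is an assumption left as a stub.

#!/usr/bin/env bash
# Illustrative reconstruction of the bdev.fio generation traced above.
# Paths and bdev names are copied from the trace; the [global] body is a
# placeholder because fio_config_gen's template is not visible in the log.
set -euo pipefail

FIO_CFG=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
BDEVS=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)

cat > "$FIO_CFG" <<'EOF'
[global]
; fio_config_gen writes its "verify" workload template here; the exact
; options are not echoed in the trace, so this section is only a stub.
EOF

# The trace probes `fio --version` and, seeing fio-3.x, appends:
echo 'serialize_overlap=1' >> "$FIO_CFG"

# One [job_<bdev>] section per bdev, exactly as the echo loop in the trace does.
for b in "${BDEVS[@]}"; do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$FIO_CFG"
done

The resulting file is what the fio_bdev invocation consumes together with --spdk_json_conf=.../bdev.json, so each filename= names an SPDK bdev rather than a kernel block device.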
job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=77407: Fri Jul 26 03:48:03 2024 00:19:50.363 read: IOPS=26.2k, BW=102MiB/s (107MB/s)(1022MiB/10001msec) 00:19:50.363 slat (usec): min=3, max=3435, avg= 7.34, stdev= 9.19 00:19:50.363 clat (usec): min=130, max=10129, avg=717.34, stdev=287.48 00:19:50.363 lat (usec): min=136, max=10138, avg=724.68, stdev=288.17 00:19:50.363 clat percentiles (usec): 00:19:50.363 | 50.000th=[ 734], 99.000th=[ 1336], 99.900th=[ 3523], 99.990th=[ 8979], 00:19:50.363 | 99.999th=[10159] 00:19:50.363 write: IOPS=26.5k, BW=104MiB/s (109MB/s)(1037MiB/10001msec); 0 zone resets 00:19:50.363 slat (usec): min=9, max=2142, avg=28.57, stdev=25.42 00:19:50.363 clat (usec): min=100, max=10440, avg=796.18, stdev=298.37 00:19:50.363 lat (usec): min=129, max=10475, avg=824.75, stdev=300.37 00:19:50.363 clat percentiles (usec): 00:19:50.363 | 50.000th=[ 799], 99.000th=[ 1483], 99.900th=[ 3130], 99.990th=[ 8586], 00:19:50.363 | 99.999th=[10421] 00:19:50.363 bw ( KiB/s): min=96071, max=122767, per=99.90%, avg=106058.95, stdev=1378.15, samples=114 00:19:50.363 iops : min=24017, max=30691, avg=26514.47, stdev=344.53, samples=114 00:19:50.363 lat (usec) : 250=1.83%, 500=14.39%, 750=31.08%, 1000=39.66% 00:19:50.363 lat (msec) : 2=12.77%, 4=0.21%, 10=0.05%, 20=0.01% 00:19:50.363 cpu : usr=60.94%, sys=26.28%, ctx=6268, majf=0, minf=22779 00:19:50.363 IO depths : 1=12.2%, 2=24.8%, 4=50.2%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.363 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.363 issued rwts: total=261610,265441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.363 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:50.363 00:19:50.363 Run status group 0 (all jobs): 00:19:50.363 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=1022MiB (1072MB), run=10001-10001msec 00:19:50.363 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=1037MiB (1087MB), run=10001-10001msec 00:19:50.363 ----------------------------------------------------- 00:19:50.363 Suppressions used: 00:19:50.363 count bytes template 00:19:50.363 6 48 /usr/src/fio/parse.c 00:19:50.363 3652 350592 /usr/src/fio/iolog.c 00:19:50.363 1 8 libtcmalloc_minimal.so 00:19:50.363 1 904 libcrypto.so 00:19:50.363 ----------------------------------------------------- 00:19:50.363 00:19:50.363 00:19:50.363 real 0m12.334s 00:19:50.363 user 0m38.437s 00:19:50.363 sys 0m16.094s 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:50.363 ************************************ 00:19:50.363 END TEST bdev_fio_rw_verify 00:19:50.363 ************************************ 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio 
-- common/autotest_common.sh@1281 -- # local workload=trim 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:50.363 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dba3d3c0-9d30-430a-9861-2bdc6db15db7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dba3d3c0-9d30-430a-9861-2bdc6db15db7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e467e0f9-1356-43e2-977e-9fc121adc82b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e467e0f9-1356-43e2-977e-9fc121adc82b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2e0ca2a8-084b-420e-b839-6f5c82bbfa4f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2e0ca2a8-084b-420e-b839-6f5c82bbfa4f",' ' "assigned_rate_limits": {' 
' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "cd6befb2-7d4c-4d0a-8920-9c82fdfa5b91"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd6befb2-7d4c-4d0a-8920-9c82fdfa5b91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "f1c9b7dc-827a-4af6-96fc-a849caa9e5d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1c9b7dc-827a-4af6-96fc-a849caa9e5d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "4c507246-4c8a-480d-8c9b-5421d961df8f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4c507246-4c8a-480d-8c9b-5421d961df8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:50.364 /home/vagrant/spdk_repo/spdk 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:50.364 00:19:50.364 real 0m12.488s 00:19:50.364 user 0m38.530s 00:19:50.364 sys 0m16.158s 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:50.364 03:48:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:50.364 ************************************ 00:19:50.364 END TEST bdev_fio 00:19:50.364 ************************************ 00:19:50.364 03:48:05 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:19:50.364 03:48:05 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:50.364 03:48:05 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:50.364 03:48:05 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:50.364 03:48:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.364 03:48:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:50.364 ************************************ 00:19:50.364 START TEST bdev_verify 00:19:50.364 ************************************ 00:19:50.364 03:48:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:50.622 [2024-07-26 03:48:05.297037] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:19:50.622 [2024-07-26 03:48:05.297229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77586 ] 00:19:50.622 [2024-07-26 03:48:05.470346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:50.882 [2024-07-26 03:48:05.704357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.882 [2024-07-26 03:48:05.704361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.449 Running I/O for 5 seconds... 
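The bdev_verify stage above drives the same bdev.json through bdevperf in verify mode. The sketch below is an illustrative, hand-run equivalent of that invocation: the binary path, JSON config and flag values are copied from the command line in the trace, while the per-flag comments are a best-effort reading of bdevperf's CLI rather than quoted SPDK documentation.

#!/usr/bin/env bash
# Re-running the traced verify pass by hand (illustrative; values from the log).
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk

args=(
    --json "$SPDK/test/bdev/bdev.json"   # bdev layout generated earlier in the run
    -q 128                               # queue depth per job
    -o 4096                              # I/O size in bytes
    -w verify                            # write, then read back and compare
    -t 5                                 # run time in seconds
    -C                                   # appears to fan I/O out from every core to every bdev
    -m 0x3                               # core mask: reactors on cores 0 and 1, as logged above
)

"$SPDK/build/examples/bdevperf" "${args[@]}"

Read this way, the doubled rows per bdev in the latency table that follows (Core Mask 0x1 and 0x2 for each nvme device) line up with both reactor cores submitting I/O to every bdev.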
00:19:56.716 00:19:56.716 Latency(us) 00:19:56.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.716 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x0 length 0xa0000 00:19:56.716 nvme0n1 : 5.03 1653.08 6.46 0.00 0.00 77280.09 15192.44 71493.82 00:19:56.716 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0xa0000 length 0xa0000 00:19:56.716 nvme0n1 : 5.05 1595.56 6.23 0.00 0.00 80071.66 13405.09 73876.95 00:19:56.716 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x0 length 0xbd0bd 00:19:56.716 nvme1n1 : 5.06 2716.99 10.61 0.00 0.00 46832.30 5242.88 71493.82 00:19:56.716 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:56.716 nvme1n1 : 5.07 2700.90 10.55 0.00 0.00 47104.71 5213.09 71970.44 00:19:56.716 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x0 length 0x80000 00:19:56.716 nvme2n1 : 5.08 1689.79 6.60 0.00 0.00 75164.15 6464.23 77213.32 00:19:56.716 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x80000 length 0x80000 00:19:56.716 nvme2n1 : 5.07 1614.34 6.31 0.00 0.00 78665.82 8757.99 79119.83 00:19:56.716 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x0 length 0x80000 00:19:56.716 nvme2n2 : 5.07 1666.85 6.51 0.00 0.00 76020.45 8460.10 71017.19 00:19:56.716 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x80000 length 0x80000 00:19:56.716 nvme2n2 : 5.06 1593.38 6.22 0.00 0.00 79521.07 19541.64 79119.83 00:19:56.716 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x0 length 0x80000 00:19:56.716 nvme2n3 : 5.07 1665.85 6.51 0.00 0.00 75917.11 14954.12 66727.56 00:19:56.716 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x80000 length 0x80000 00:19:56.716 nvme2n3 : 5.08 1612.06 6.30 0.00 0.00 78445.52 2993.80 75783.45 00:19:56.716 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x0 length 0x20000 00:19:56.716 nvme3n1 : 5.08 1663.52 6.50 0.00 0.00 75869.27 10366.60 74830.20 00:19:56.716 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.716 Verification LBA range: start 0x20000 length 0x20000 00:19:56.716 nvme3n1 : 5.08 1611.42 6.29 0.00 0.00 78305.51 7536.64 73400.32 00:19:56.716 =================================================================================================================== 00:19:56.716 Total : 21783.73 85.09 0.00 0.00 69901.62 2993.80 79119.83 00:19:57.654 00:19:57.654 real 0m7.262s 00:19:57.654 user 0m11.406s 00:19:57.654 sys 0m1.666s 00:19:57.654 03:48:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.654 03:48:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:57.655 ************************************ 00:19:57.655 END TEST bdev_verify 00:19:57.655 ************************************ 00:19:57.655 03:48:12 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:19:57.655 03:48:12 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:57.655 03:48:12 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:57.655 03:48:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.655 03:48:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:57.655 ************************************ 00:19:57.655 START TEST bdev_verify_big_io 00:19:57.655 ************************************ 00:19:57.655 03:48:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:57.914 [2024-07-26 03:48:12.597596] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:19:57.914 [2024-07-26 03:48:12.597783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77686 ] 00:19:57.914 [2024-07-26 03:48:12.762762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:58.172 [2024-07-26 03:48:12.987799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.172 [2024-07-26 03:48:12.987802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.737 Running I/O for 5 seconds... 00:20:05.300 00:20:05.300 Latency(us) 00:20:05.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.300 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:05.300 Verification LBA range: start 0x0 length 0xa000 00:20:05.300 nvme0n1 : 6.02 114.32 7.15 0.00 0.00 1055190.56 205902.20 1021884.97 00:20:05.301 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0xa000 length 0xa000 00:20:05.301 nvme0n1 : 6.01 106.49 6.66 0.00 0.00 1139572.36 95325.09 2165786.07 00:20:05.301 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x0 length 0xbd0b 00:20:05.301 nvme1n1 : 6.03 111.35 6.96 0.00 0.00 1076678.95 5838.66 2089525.99 00:20:05.301 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:05.301 nvme1n1 : 5.86 152.95 9.56 0.00 0.00 776980.88 79119.83 804543.77 00:20:05.301 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x0 length 0x8000 00:20:05.301 nvme2n1 : 6.02 138.10 8.63 0.00 0.00 859761.86 26691.03 1121023.07 00:20:05.301 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x8000 length 0x8000 00:20:05.301 nvme2n1 : 6.01 138.35 8.65 0.00 0.00 842461.27 132501.88 930372.89 00:20:05.301 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x0 length 0x8000 00:20:05.301 nvme2n2 : 6.04 90.60 5.66 0.00 0.00 1269114.82 33125.47 2897882.76 00:20:05.301 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:20:05.301 Verification LBA range: start 0x8000 length 0x8000 00:20:05.301 nvme2n2 : 6.02 101.04 6.31 0.00 0.00 1115478.38 133455.13 1593835.52 00:20:05.301 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x0 length 0x8000 00:20:05.301 nvme2n3 : 6.03 135.36 8.46 0.00 0.00 823586.01 17754.30 1281169.22 00:20:05.301 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x8000 length 0x8000 00:20:05.301 nvme2n3 : 6.04 145.61 9.10 0.00 0.00 756646.29 15847.80 1296421.24 00:20:05.301 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x0 length 0x2000 00:20:05.301 nvme3n1 : 6.03 116.69 7.29 0.00 0.00 924556.31 16324.42 1692973.61 00:20:05.301 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:05.301 Verification LBA range: start 0x2000 length 0x2000 00:20:05.301 nvme3n1 : 6.04 103.36 6.46 0.00 0.00 1031746.97 3112.96 3004646.87 00:20:05.301 =================================================================================================================== 00:20:05.301 Total : 1454.23 90.89 0.00 0.00 949337.40 3112.96 3004646.87 00:20:06.236 00:20:06.236 real 0m8.515s 00:20:06.236 user 0m15.284s 00:20:06.236 sys 0m0.512s 00:20:06.236 03:48:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.236 03:48:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.236 ************************************ 00:20:06.236 END TEST bdev_verify_big_io 00:20:06.236 ************************************ 00:20:06.236 03:48:21 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:20:06.236 03:48:21 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.236 03:48:21 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:20:06.236 03:48:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.236 03:48:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:06.236 ************************************ 00:20:06.236 START TEST bdev_write_zeroes 00:20:06.236 ************************************ 00:20:06.236 03:48:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.495 [2024-07-26 03:48:21.165649] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:20:06.495 [2024-07-26 03:48:21.165805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77797 ] 00:20:06.495 [2024-07-26 03:48:21.326606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.754 [2024-07-26 03:48:21.516937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.321 Running I/O for 1 seconds... 
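bdev_write_zeroes repeats the pattern with the write_zeroes workload: a single reactor core, a one-second runtime, and the same 4 KiB I/O size and queue depth of 128, so the table that follows measures how quickly each bdev services zero-fill requests rather than verifying data. The corresponding invocation, again with flags copied from the trace, is roughly:

    # Sketch of the bdev_write_zeroes step (flags as printed in the trace;
    # no core mask is given, so it runs on the single default reactor core).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1
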
00:20:08.256 00:20:08.256 Latency(us) 00:20:08.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.256 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.256 nvme0n1 : 1.01 9770.16 38.16 0.00 0.00 13086.23 6762.12 23354.65 00:20:08.256 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.256 nvme1n1 : 1.01 14825.11 57.91 0.00 0.00 8603.13 4825.83 14298.76 00:20:08.256 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.257 nvme2n1 : 1.01 9741.81 38.05 0.00 0.00 13034.74 7268.54 22878.02 00:20:08.257 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.257 nvme2n2 : 1.01 9726.27 37.99 0.00 0.00 13046.65 7268.54 22639.71 00:20:08.257 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.257 nvme2n3 : 1.02 9798.00 38.27 0.00 0.00 12941.85 3842.79 21567.30 00:20:08.257 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.257 nvme3n1 : 1.02 9786.08 38.23 0.00 0.00 12947.61 4170.47 20733.21 00:20:08.257 =================================================================================================================== 00:20:08.257 Total : 63647.43 248.62 0.00 0.00 11987.80 3842.79 23354.65 00:20:09.632 00:20:09.632 real 0m3.037s 00:20:09.632 user 0m2.290s 00:20:09.632 sys 0m0.563s 00:20:09.632 03:48:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.632 03:48:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 ************************************ 00:20:09.632 END TEST bdev_write_zeroes 00:20:09.632 ************************************ 00:20:09.632 03:48:24 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:20:09.632 03:48:24 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:09.632 03:48:24 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:20:09.632 03:48:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.632 03:48:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 ************************************ 00:20:09.632 START TEST bdev_json_nonenclosed 00:20:09.632 ************************************ 00:20:09.632 03:48:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:09.632 [2024-07-26 03:48:24.267061] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:20:09.632 [2024-07-26 03:48:24.267272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77856 ] 00:20:09.632 [2024-07-26 03:48:24.441232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.891 [2024-07-26 03:48:24.636168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.891 [2024-07-26 03:48:24.636290] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:09.891 [2024-07-26 03:48:24.636326] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:09.891 [2024-07-26 03:48:24.636346] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:10.485 00:20:10.485 real 0m0.899s 00:20:10.485 user 0m0.658s 00:20:10.485 sys 0m0.135s 00:20:10.485 03:48:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:20:10.485 03:48:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.485 03:48:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:10.485 ************************************ 00:20:10.485 END TEST bdev_json_nonenclosed 00:20:10.485 ************************************ 00:20:10.485 03:48:25 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:20:10.485 03:48:25 blockdev_xnvme -- bdev/blockdev.sh@781 -- # true 00:20:10.485 03:48:25 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.485 03:48:25 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:20:10.485 03:48:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.485 03:48:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:10.485 ************************************ 00:20:10.485 START TEST bdev_json_nonarray 00:20:10.485 ************************************ 00:20:10.485 03:48:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.485 [2024-07-26 03:48:25.211242] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:20:10.485 [2024-07-26 03:48:25.211443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77886 ] 00:20:10.485 [2024-07-26 03:48:25.386858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.743 [2024-07-26 03:48:25.622414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.743 [2024-07-26 03:48:25.622585] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
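bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed configuration file and expects the run to fail (the harness records exit status 234 and the trailing true keeps the suite going). The actual contents of nonenclosed.json and nonarray.json are not reproduced in the trace; hypothetical minimal inputs that would provoke the two errors reported above could look like this (illustrative only):

    # Hypothetical stand-ins for the malformed configs used by the two negative
    # tests; the real files are not shown in this trace.

    # Would trigger "Invalid JSON configuration: not enclosed in {}." -
    # the top-level value is a JSON array instead of an object.
    cat > nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF

    # Would trigger "Invalid JSON configuration: 'subsystems' should be an array."
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
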
00:20:10.743 [2024-07-26 03:48:25.622647] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:10.743 [2024-07-26 03:48:25.622669] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:11.310 00:20:11.310 real 0m0.971s 00:20:11.310 user 0m0.715s 00:20:11.310 sys 0m0.148s 00:20:11.310 03:48:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:20:11.310 03:48:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.310 03:48:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:11.310 ************************************ 00:20:11.310 END TEST bdev_json_nonarray 00:20:11.310 ************************************ 00:20:11.310 03:48:26 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@784 -- # true 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:11.310 03:48:26 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:11.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:24.094 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.094 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.094 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.094 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.094 00:20:24.094 real 1m12.322s 00:20:24.094 user 1m46.559s 00:20:24.094 sys 0m46.358s 00:20:24.094 03:48:37 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.094 03:48:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:24.094 ************************************ 00:20:24.094 END TEST blockdev_xnvme 00:20:24.094 ************************************ 00:20:24.094 03:48:37 -- common/autotest_common.sh@1142 -- # return 0 00:20:24.094 03:48:37 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:24.094 03:48:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:24.095 03:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.095 03:48:37 -- common/autotest_common.sh@10 -- # set +x 00:20:24.095 ************************************ 00:20:24.095 START TEST ublk 00:20:24.095 ************************************ 00:20:24.095 03:48:37 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:24.095 * Looking for test storage... 
00:20:24.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:24.095 03:48:37 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:24.095 03:48:37 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:24.095 03:48:37 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:24.095 03:48:37 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:24.095 03:48:37 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:24.095 03:48:37 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:24.095 03:48:37 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:24.095 03:48:37 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:24.095 03:48:37 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:24.095 03:48:37 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:24.095 03:48:37 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.095 03:48:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:24.095 ************************************ 00:20:24.095 START TEST test_save_ublk_config 00:20:24.095 ************************************ 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=78191 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 78191 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78191 ']' 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.095 03:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:24.095 [2024-07-26 03:48:37.772981] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:20:24.095 [2024-07-26 03:48:37.773159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78191 ] 00:20:24.095 [2024-07-26 03:48:37.947524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.095 [2024-07-26 03:48:38.183152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.353 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.353 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:20:24.353 03:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:24.353 03:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:24.353 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.353 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:24.353 [2024-07-26 03:48:39.011844] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:24.353 [2024-07-26 03:48:39.012936] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:24.353 malloc0 00:20:24.353 [2024-07-26 03:48:39.084364] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:24.353 [2024-07-26 03:48:39.084489] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:24.354 [2024-07-26 03:48:39.084506] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:24.354 [2024-07-26 03:48:39.084519] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:24.354 [2024-07-26 03:48:39.092971] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:24.354 [2024-07-26 03:48:39.093020] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:24.354 [2024-07-26 03:48:39.099846] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:24.354 [2024-07-26 03:48:39.099975] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:24.354 [2024-07-26 03:48:39.116848] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:24.354 0 00:20:24.354 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.354 03:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:24.354 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.354 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:24.612 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.613 03:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:24.613 "subsystems": [ 00:20:24.613 { 00:20:24.613 "subsystem": "keyring", 00:20:24.613 "config": [] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "iobuf", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "iobuf_set_options", 00:20:24.613 "params": { 00:20:24.613 "small_pool_count": 8192, 00:20:24.613 "large_pool_count": 1024, 00:20:24.613 "small_bufsize": 8192, 00:20:24.613 "large_bufsize": 135168 00:20:24.613 } 00:20:24.613 } 00:20:24.613 ] 00:20:24.613 }, 00:20:24.613 { 
00:20:24.613 "subsystem": "sock", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "sock_set_default_impl", 00:20:24.613 "params": { 00:20:24.613 "impl_name": "posix" 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "sock_impl_set_options", 00:20:24.613 "params": { 00:20:24.613 "impl_name": "ssl", 00:20:24.613 "recv_buf_size": 4096, 00:20:24.613 "send_buf_size": 4096, 00:20:24.613 "enable_recv_pipe": true, 00:20:24.613 "enable_quickack": false, 00:20:24.613 "enable_placement_id": 0, 00:20:24.613 "enable_zerocopy_send_server": true, 00:20:24.613 "enable_zerocopy_send_client": false, 00:20:24.613 "zerocopy_threshold": 0, 00:20:24.613 "tls_version": 0, 00:20:24.613 "enable_ktls": false 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "sock_impl_set_options", 00:20:24.613 "params": { 00:20:24.613 "impl_name": "posix", 00:20:24.613 "recv_buf_size": 2097152, 00:20:24.613 "send_buf_size": 2097152, 00:20:24.613 "enable_recv_pipe": true, 00:20:24.613 "enable_quickack": false, 00:20:24.613 "enable_placement_id": 0, 00:20:24.613 "enable_zerocopy_send_server": true, 00:20:24.613 "enable_zerocopy_send_client": false, 00:20:24.613 "zerocopy_threshold": 0, 00:20:24.613 "tls_version": 0, 00:20:24.613 "enable_ktls": false 00:20:24.613 } 00:20:24.613 } 00:20:24.613 ] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "vmd", 00:20:24.613 "config": [] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "accel", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "accel_set_options", 00:20:24.613 "params": { 00:20:24.613 "small_cache_size": 128, 00:20:24.613 "large_cache_size": 16, 00:20:24.613 "task_count": 2048, 00:20:24.613 "sequence_count": 2048, 00:20:24.613 "buf_count": 2048 00:20:24.613 } 00:20:24.613 } 00:20:24.613 ] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "bdev", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "bdev_set_options", 00:20:24.613 "params": { 00:20:24.613 "bdev_io_pool_size": 65535, 00:20:24.613 "bdev_io_cache_size": 256, 00:20:24.613 "bdev_auto_examine": true, 00:20:24.613 "iobuf_small_cache_size": 128, 00:20:24.613 "iobuf_large_cache_size": 16 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "bdev_raid_set_options", 00:20:24.613 "params": { 00:20:24.613 "process_window_size_kb": 1024, 00:20:24.613 "process_max_bandwidth_mb_sec": 0 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "bdev_iscsi_set_options", 00:20:24.613 "params": { 00:20:24.613 "timeout_sec": 30 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "bdev_nvme_set_options", 00:20:24.613 "params": { 00:20:24.613 "action_on_timeout": "none", 00:20:24.613 "timeout_us": 0, 00:20:24.613 "timeout_admin_us": 0, 00:20:24.613 "keep_alive_timeout_ms": 10000, 00:20:24.613 "arbitration_burst": 0, 00:20:24.613 "low_priority_weight": 0, 00:20:24.613 "medium_priority_weight": 0, 00:20:24.613 "high_priority_weight": 0, 00:20:24.613 "nvme_adminq_poll_period_us": 10000, 00:20:24.613 "nvme_ioq_poll_period_us": 0, 00:20:24.613 "io_queue_requests": 0, 00:20:24.613 "delay_cmd_submit": true, 00:20:24.613 "transport_retry_count": 4, 00:20:24.613 "bdev_retry_count": 3, 00:20:24.613 "transport_ack_timeout": 0, 00:20:24.613 "ctrlr_loss_timeout_sec": 0, 00:20:24.613 "reconnect_delay_sec": 0, 00:20:24.613 "fast_io_fail_timeout_sec": 0, 00:20:24.613 "disable_auto_failback": false, 00:20:24.613 "generate_uuids": false, 00:20:24.613 "transport_tos": 0, 00:20:24.613 "nvme_error_stat": false, 
00:20:24.613 "rdma_srq_size": 0, 00:20:24.613 "io_path_stat": false, 00:20:24.613 "allow_accel_sequence": false, 00:20:24.613 "rdma_max_cq_size": 0, 00:20:24.613 "rdma_cm_event_timeout_ms": 0, 00:20:24.613 "dhchap_digests": [ 00:20:24.613 "sha256", 00:20:24.613 "sha384", 00:20:24.613 "sha512" 00:20:24.613 ], 00:20:24.613 "dhchap_dhgroups": [ 00:20:24.613 "null", 00:20:24.613 "ffdhe2048", 00:20:24.613 "ffdhe3072", 00:20:24.613 "ffdhe4096", 00:20:24.613 "ffdhe6144", 00:20:24.613 "ffdhe8192" 00:20:24.613 ] 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "bdev_nvme_set_hotplug", 00:20:24.613 "params": { 00:20:24.613 "period_us": 100000, 00:20:24.613 "enable": false 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "bdev_malloc_create", 00:20:24.613 "params": { 00:20:24.613 "name": "malloc0", 00:20:24.613 "num_blocks": 8192, 00:20:24.613 "block_size": 4096, 00:20:24.613 "physical_block_size": 4096, 00:20:24.613 "uuid": "76183212-649c-4f59-b20c-6513482df7ce", 00:20:24.613 "optimal_io_boundary": 0, 00:20:24.613 "md_size": 0, 00:20:24.613 "dif_type": 0, 00:20:24.613 "dif_is_head_of_md": false, 00:20:24.613 "dif_pi_format": 0 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "bdev_wait_for_examine" 00:20:24.613 } 00:20:24.613 ] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "scsi", 00:20:24.613 "config": null 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "scheduler", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "framework_set_scheduler", 00:20:24.613 "params": { 00:20:24.613 "name": "static" 00:20:24.613 } 00:20:24.613 } 00:20:24.613 ] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "vhost_scsi", 00:20:24.613 "config": [] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "vhost_blk", 00:20:24.613 "config": [] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "ublk", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "ublk_create_target", 00:20:24.613 "params": { 00:20:24.613 "cpumask": "1" 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "ublk_start_disk", 00:20:24.613 "params": { 00:20:24.613 "bdev_name": "malloc0", 00:20:24.613 "ublk_id": 0, 00:20:24.613 "num_queues": 1, 00:20:24.613 "queue_depth": 128 00:20:24.613 } 00:20:24.613 } 00:20:24.613 ] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "nbd", 00:20:24.613 "config": [] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "nvmf", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "nvmf_set_config", 00:20:24.613 "params": { 00:20:24.613 "discovery_filter": "match_any", 00:20:24.613 "admin_cmd_passthru": { 00:20:24.613 "identify_ctrlr": false 00:20:24.613 } 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "nvmf_set_max_subsystems", 00:20:24.613 "params": { 00:20:24.613 "max_subsystems": 1024 00:20:24.613 } 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "method": "nvmf_set_crdt", 00:20:24.613 "params": { 00:20:24.613 "crdt1": 0, 00:20:24.613 "crdt2": 0, 00:20:24.613 "crdt3": 0 00:20:24.613 } 00:20:24.613 } 00:20:24.613 ] 00:20:24.613 }, 00:20:24.613 { 00:20:24.613 "subsystem": "iscsi", 00:20:24.613 "config": [ 00:20:24.613 { 00:20:24.613 "method": "iscsi_set_options", 00:20:24.613 "params": { 00:20:24.613 "node_base": "iqn.2016-06.io.spdk", 00:20:24.613 "max_sessions": 128, 00:20:24.613 "max_connections_per_session": 2, 00:20:24.613 "max_queue_depth": 64, 00:20:24.613 "default_time2wait": 2, 00:20:24.613 "default_time2retain": 20, 00:20:24.613 
"first_burst_length": 8192, 00:20:24.613 "immediate_data": true, 00:20:24.613 "allow_duplicated_isid": false, 00:20:24.613 "error_recovery_level": 0, 00:20:24.613 "nop_timeout": 60, 00:20:24.613 "nop_in_interval": 30, 00:20:24.613 "disable_chap": false, 00:20:24.613 "require_chap": false, 00:20:24.613 "mutual_chap": false, 00:20:24.613 "chap_group": 0, 00:20:24.613 "max_large_datain_per_connection": 64, 00:20:24.613 "max_r2t_per_connection": 4, 00:20:24.613 "pdu_pool_size": 36864, 00:20:24.614 "immediate_data_pool_size": 16384, 00:20:24.614 "data_out_pool_size": 2048 00:20:24.614 } 00:20:24.614 } 00:20:24.614 ] 00:20:24.614 } 00:20:24.614 ] 00:20:24.614 }' 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 78191 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78191 ']' 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78191 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78191 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:24.614 killing process with pid 78191 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78191' 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78191 00:20:24.614 03:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78191 00:20:25.988 [2024-07-26 03:48:40.724702] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:25.988 [2024-07-26 03:48:40.754923] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:25.988 [2024-07-26 03:48:40.755121] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:25.988 [2024-07-26 03:48:40.761847] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:25.988 [2024-07-26 03:48:40.761928] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:25.988 [2024-07-26 03:48:40.761951] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:25.988 [2024-07-26 03:48:40.761987] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:20:25.988 [2024-07-26 03:48:40.762189] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=78246 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 78246 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78246 ']' 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:27.361 03:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:27.361 03:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:27.361 "subsystems": [ 00:20:27.361 { 00:20:27.361 "subsystem": "keyring", 00:20:27.361 "config": [] 00:20:27.361 }, 00:20:27.361 { 00:20:27.361 "subsystem": "iobuf", 00:20:27.361 "config": [ 00:20:27.361 { 00:20:27.361 "method": "iobuf_set_options", 00:20:27.362 "params": { 00:20:27.362 "small_pool_count": 8192, 00:20:27.362 "large_pool_count": 1024, 00:20:27.362 "small_bufsize": 8192, 00:20:27.362 "large_bufsize": 135168 00:20:27.362 } 00:20:27.362 } 00:20:27.362 ] 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "subsystem": "sock", 00:20:27.362 "config": [ 00:20:27.362 { 00:20:27.362 "method": "sock_set_default_impl", 00:20:27.362 "params": { 00:20:27.362 "impl_name": "posix" 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": "sock_impl_set_options", 00:20:27.362 "params": { 00:20:27.362 "impl_name": "ssl", 00:20:27.362 "recv_buf_size": 4096, 00:20:27.362 "send_buf_size": 4096, 00:20:27.362 "enable_recv_pipe": true, 00:20:27.362 "enable_quickack": false, 00:20:27.362 "enable_placement_id": 0, 00:20:27.362 "enable_zerocopy_send_server": true, 00:20:27.362 "enable_zerocopy_send_client": false, 00:20:27.362 "zerocopy_threshold": 0, 00:20:27.362 "tls_version": 0, 00:20:27.362 "enable_ktls": false 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": "sock_impl_set_options", 00:20:27.362 "params": { 00:20:27.362 "impl_name": "posix", 00:20:27.362 "recv_buf_size": 2097152, 00:20:27.362 "send_buf_size": 2097152, 00:20:27.362 "enable_recv_pipe": true, 00:20:27.362 "enable_quickack": false, 00:20:27.362 "enable_placement_id": 0, 00:20:27.362 "enable_zerocopy_send_server": true, 00:20:27.362 "enable_zerocopy_send_client": false, 00:20:27.362 "zerocopy_threshold": 0, 00:20:27.362 "tls_version": 0, 00:20:27.362 "enable_ktls": false 00:20:27.362 } 00:20:27.362 } 00:20:27.362 ] 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "subsystem": "vmd", 00:20:27.362 "config": [] 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "subsystem": "accel", 00:20:27.362 "config": [ 00:20:27.362 { 00:20:27.362 "method": "accel_set_options", 00:20:27.362 "params": { 00:20:27.362 "small_cache_size": 128, 00:20:27.362 "large_cache_size": 16, 00:20:27.362 "task_count": 2048, 00:20:27.362 "sequence_count": 2048, 00:20:27.362 "buf_count": 2048 00:20:27.362 } 00:20:27.362 } 00:20:27.362 ] 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "subsystem": "bdev", 00:20:27.362 "config": [ 00:20:27.362 { 00:20:27.362 "method": "bdev_set_options", 00:20:27.362 "params": { 00:20:27.362 "bdev_io_pool_size": 65535, 00:20:27.362 "bdev_io_cache_size": 256, 00:20:27.362 "bdev_auto_examine": true, 00:20:27.362 "iobuf_small_cache_size": 128, 00:20:27.362 "iobuf_large_cache_size": 16 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": "bdev_raid_set_options", 00:20:27.362 "params": { 00:20:27.362 "process_window_size_kb": 1024, 00:20:27.362 "process_max_bandwidth_mb_sec": 0 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": "bdev_iscsi_set_options", 00:20:27.362 "params": { 00:20:27.362 "timeout_sec": 30 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": 
"bdev_nvme_set_options", 00:20:27.362 "params": { 00:20:27.362 "action_on_timeout": "none", 00:20:27.362 "timeout_us": 0, 00:20:27.362 "timeout_admin_us": 0, 00:20:27.362 "keep_alive_timeout_ms": 10000, 00:20:27.362 "arbitration_burst": 0, 00:20:27.362 "low_priority_weight": 0, 00:20:27.362 "medium_priority_weight": 0, 00:20:27.362 "high_priority_weight": 0, 00:20:27.362 "nvme_adminq_poll_period_us": 10000, 00:20:27.362 "nvme_ioq_poll_period_us": 0, 00:20:27.362 "io_queue_requests": 0, 00:20:27.362 "delay_cmd_submit": true, 00:20:27.362 "transport_retry_count": 4, 00:20:27.362 "bdev_retry_count": 3, 00:20:27.362 "transport_ack_timeout": 0, 00:20:27.362 "ctrlr_loss_timeout_sec": 0, 00:20:27.362 "reconnect_delay_sec": 0, 00:20:27.362 "fast_io_fail_timeout_sec": 0, 00:20:27.362 "disable_auto_failback": false, 00:20:27.362 "generate_uuids": false, 00:20:27.362 "transport_tos": 0, 00:20:27.362 "nvme_error_stat": false, 00:20:27.362 "rdma_srq_size": 0, 00:20:27.362 "io_path_stat": false, 00:20:27.362 "allow_accel_sequence": false, 00:20:27.362 "rdma_max_cq_size": 0, 00:20:27.362 "rdma_cm_event_timeout_ms": 0, 00:20:27.362 "dhchap_digests": [ 00:20:27.362 "sha256", 00:20:27.362 "sha384", 00:20:27.362 "sha512" 00:20:27.362 ], 00:20:27.362 "dhchap_dhgroups": [ 00:20:27.362 "null", 00:20:27.362 "ffdhe2048", 00:20:27.362 "ffdhe3072", 00:20:27.362 "ffdhe4096", 00:20:27.362 "ffdhe6144", 00:20:27.362 "ffdhe8192" 00:20:27.362 ] 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": "bdev_nvme_set_hotplug", 00:20:27.362 "params": { 00:20:27.362 "period_us": 100000, 00:20:27.362 "enable": false 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": "bdev_malloc_create", 00:20:27.362 "params": { 00:20:27.362 "name": "malloc0", 00:20:27.362 "num_blocks": 8192, 00:20:27.362 "block_size": 4096, 00:20:27.362 "physical_block_size": 4096, 00:20:27.362 "uuid": "76183212-649c-4f59-b20c-6513482df7ce", 00:20:27.362 "optimal_io_boundary": 0, 00:20:27.362 "md_size": 0, 00:20:27.362 "dif_type": 0, 00:20:27.362 "dif_is_head_of_md": false, 00:20:27.362 "dif_pi_format": 0 00:20:27.362 } 00:20:27.362 }, 00:20:27.362 { 00:20:27.362 "method": "bdev_wait_for_examine" 00:20:27.362 } 00:20:27.362 ] 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "scsi", 00:20:27.363 "config": null 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "scheduler", 00:20:27.363 "config": [ 00:20:27.363 { 00:20:27.363 "method": "framework_set_scheduler", 00:20:27.363 "params": { 00:20:27.363 "name": "static" 00:20:27.363 } 00:20:27.363 } 00:20:27.363 ] 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "vhost_scsi", 00:20:27.363 "config": [] 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "vhost_blk", 00:20:27.363 "config": [] 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "ublk", 00:20:27.363 "config": [ 00:20:27.363 { 00:20:27.363 "method": "ublk_create_target", 00:20:27.363 "params": { 00:20:27.363 "cpumask": "1" 00:20:27.363 } 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "method": "ublk_start_disk", 00:20:27.363 "params": { 00:20:27.363 "bdev_name": "malloc0", 00:20:27.363 "ublk_id": 0, 00:20:27.363 "num_queues": 1, 00:20:27.363 "queue_depth": 128 00:20:27.363 } 00:20:27.363 } 00:20:27.363 ] 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "nbd", 00:20:27.363 "config": [] 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "nvmf", 00:20:27.363 "config": [ 00:20:27.363 { 00:20:27.363 "method": "nvmf_set_config", 00:20:27.363 "params": { 00:20:27.363 "discovery_filter": 
"match_any", 00:20:27.363 "admin_cmd_passthru": { 00:20:27.363 "identify_ctrlr": false 00:20:27.363 } 00:20:27.363 } 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "method": "nvmf_set_max_subsystems", 00:20:27.363 "params": { 00:20:27.363 "max_subsystems": 1024 00:20:27.363 } 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "method": "nvmf_set_crdt", 00:20:27.363 "params": { 00:20:27.363 "crdt1": 0, 00:20:27.363 "crdt2": 0, 00:20:27.363 "crdt3": 0 00:20:27.363 } 00:20:27.363 } 00:20:27.363 ] 00:20:27.363 }, 00:20:27.363 { 00:20:27.363 "subsystem": "iscsi", 00:20:27.363 "config": [ 00:20:27.363 { 00:20:27.363 "method": "iscsi_set_options", 00:20:27.363 "params": { 00:20:27.363 "node_base": "iqn.2016-06.io.spdk", 00:20:27.363 "max_sessions": 128, 00:20:27.363 "max_connections_per_session": 2, 00:20:27.363 "max_queue_depth": 64, 00:20:27.363 "default_time2wait": 2, 00:20:27.363 "default_time2retain": 20, 00:20:27.363 "first_burst_length": 8192, 00:20:27.363 "immediate_data": true, 00:20:27.363 "allow_duplicated_isid": false, 00:20:27.363 "error_recovery_level": 0, 00:20:27.363 "nop_timeout": 60, 00:20:27.363 "nop_in_interval": 30, 00:20:27.363 "disable_chap": false, 00:20:27.363 "require_chap": false, 00:20:27.363 "mutual_chap": false, 00:20:27.363 "chap_group": 0, 00:20:27.363 "max_large_datain_per_connection": 64, 00:20:27.363 "max_r2t_per_connection": 4, 00:20:27.363 "pdu_pool_size": 36864, 00:20:27.363 "immediate_data_pool_size": 16384, 00:20:27.363 "data_out_pool_size": 2048 00:20:27.363 } 00:20:27.363 } 00:20:27.363 ] 00:20:27.363 } 00:20:27.363 ] 00:20:27.363 }' 00:20:27.363 [2024-07-26 03:48:42.110272] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:20:27.363 [2024-07-26 03:48:42.110442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78246 ] 00:20:27.621 [2024-07-26 03:48:42.278208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.621 [2024-07-26 03:48:42.479017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.554 [2024-07-26 03:48:43.334841] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:28.554 [2024-07-26 03:48:43.335910] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:28.554 [2024-07-26 03:48:43.342976] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:28.554 [2024-07-26 03:48:43.343071] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:28.554 [2024-07-26 03:48:43.343088] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:28.554 [2024-07-26 03:48:43.343096] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:28.554 [2024-07-26 03:48:43.351911] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:28.554 [2024-07-26 03:48:43.351937] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:28.554 [2024-07-26 03:48:43.358856] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:28.554 [2024-07-26 03:48:43.358972] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:28.554 [2024-07-26 03:48:43.375850] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:20:28.554 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.554 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:20:28.554 03:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:28.554 03:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:28.554 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.554 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:28.554 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 78246 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78246 ']' 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78246 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78246 00:20:28.812 killing process with pid 78246 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78246' 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78246 00:20:28.812 03:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78246 00:20:30.214 [2024-07-26 03:48:44.879017] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:30.214 [2024-07-26 03:48:44.913860] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:30.214 [2024-07-26 03:48:44.914116] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:30.214 [2024-07-26 03:48:44.922873] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:30.214 [2024-07-26 03:48:44.922942] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:30.214 [2024-07-26 03:48:44.922956] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:30.214 [2024-07-26 03:48:44.922993] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:20:30.214 [2024-07-26 03:48:44.925014] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:20:31.586 03:48:46 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:31.586 00:20:31.586 real 0m8.521s 00:20:31.586 user 0m7.487s 00:20:31.586 sys 0m1.940s 00:20:31.586 03:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.586 ************************************ 00:20:31.586 END TEST test_save_ublk_config 00:20:31.586 03:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:31.586 ************************************ 00:20:31.586 03:48:46 
ublk -- common/autotest_common.sh@1142 -- # return 0 00:20:31.586 03:48:46 ublk -- ublk/ublk.sh@139 -- # spdk_pid=78325 00:20:31.586 03:48:46 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:31.586 03:48:46 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.586 03:48:46 ublk -- ublk/ublk.sh@141 -- # waitforlisten 78325 00:20:31.586 03:48:46 ublk -- common/autotest_common.sh@829 -- # '[' -z 78325 ']' 00:20:31.586 03:48:46 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.586 03:48:46 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.586 03:48:46 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.587 03:48:46 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.587 03:48:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:31.587 [2024-07-26 03:48:46.321917] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:20:31.587 [2024-07-26 03:48:46.322070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78325 ] 00:20:31.587 [2024-07-26 03:48:46.485111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:31.987 [2024-07-26 03:48:46.678091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.987 [2024-07-26 03:48:46.678093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.552 03:48:47 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.552 03:48:47 ublk -- common/autotest_common.sh@862 -- # return 0 00:20:32.552 03:48:47 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:32.552 03:48:47 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:32.552 03:48:47 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.552 03:48:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:32.552 ************************************ 00:20:32.552 START TEST test_create_ublk 00:20:32.552 ************************************ 00:20:32.552 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:20:32.552 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:32.552 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.552 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:32.552 [2024-07-26 03:48:47.402845] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:32.552 [2024-07-26 03:48:47.405234] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:32.552 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.552 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:32.552 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:32.552 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.552 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:32.809 03:48:47 
ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.809 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:32.809 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:32.809 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.809 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:32.809 [2024-07-26 03:48:47.665044] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:32.809 [2024-07-26 03:48:47.665528] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:32.809 [2024-07-26 03:48:47.665549] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:32.809 [2024-07-26 03:48:47.665563] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:32.809 [2024-07-26 03:48:47.672910] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:32.809 [2024-07-26 03:48:47.672962] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:32.809 [2024-07-26 03:48:47.680888] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:32.809 [2024-07-26 03:48:47.693199] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:32.809 [2024-07-26 03:48:47.709894] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:33.067 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:33.067 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.067 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:33.067 03:48:47 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:33.067 { 00:20:33.067 "ublk_device": "/dev/ublkb0", 00:20:33.067 "id": 0, 00:20:33.067 "queue_depth": 512, 00:20:33.067 "num_queues": 4, 00:20:33.067 "bdev_name": "Malloc0" 00:20:33.067 } 00:20:33.067 ]' 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:33.067 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:33.325 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:33.325 03:48:47 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based 
--runtime=10' 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:33.325 03:48:47 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:33.325 fio: verification read phase will never start because write phase uses all of runtime 00:20:33.325 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:33.325 fio-3.35 00:20:33.325 Starting 1 process 00:20:43.305 00:20:43.305 fio_test: (groupid=0, jobs=1): err= 0: pid=78374: Fri Jul 26 03:48:58 2024 00:20:43.305 write: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(403MiB/10001msec); 0 zone resets 00:20:43.305 clat (usec): min=63, max=8037, avg=95.48, stdev=170.63 00:20:43.305 lat (usec): min=63, max=8054, avg=96.22, stdev=170.65 00:20:43.305 clat percentiles (usec): 00:20:43.305 | 1.00th=[ 77], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 81], 00:20:43.306 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 84], 00:20:43.306 | 70.00th=[ 86], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 103], 00:20:43.306 | 99.00th=[ 121], 99.50th=[ 180], 99.90th=[ 3261], 99.95th=[ 3654], 00:20:43.306 | 99.99th=[ 4146] 00:20:43.306 bw ( KiB/s): min=17928, max=43184, per=99.95%, avg=41245.05, stdev=5665.87, samples=19 00:20:43.306 iops : min= 4482, max=10796, avg=10311.26, stdev=1416.47, samples=19 00:20:43.306 lat (usec) : 100=93.69%, 250=5.84%, 500=0.03%, 750=0.03%, 1000=0.03% 00:20:43.306 lat (msec) : 2=0.12%, 4=0.25%, 10=0.02% 00:20:43.306 cpu : usr=2.93%, sys=7.63%, ctx=103175, majf=0, minf=797 00:20:43.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.306 issued rwts: total=0,103171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:43.306 00:20:43.306 Run status group 0 (all jobs): 00:20:43.306 WRITE: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=403MiB (423MB), run=10001-10001msec 00:20:43.306 00:20:43.306 Disk stats (read/write): 00:20:43.306 ublkb0: ios=0/102082, merge=0/0, ticks=0/8890, in_queue=8890, 
util=99.11% 00:20:43.564 03:48:58 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:43.564 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.564 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.564 [2024-07-26 03:48:58.219202] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:43.564 [2024-07-26 03:48:58.260320] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:43.564 [2024-07-26 03:48:58.266176] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:43.564 [2024-07-26 03:48:58.275289] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:43.564 [2024-07-26 03:48:58.275691] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:43.564 [2024-07-26 03:48:58.275716] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:43.564 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.564 03:48:58 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:20:43.564 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.565 [2024-07-26 03:48:58.287982] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:43.565 request: 00:20:43.565 { 00:20:43.565 "ublk_id": 0, 00:20:43.565 "method": "ublk_stop_disk", 00:20:43.565 "req_id": 1 00:20:43.565 } 00:20:43.565 Got JSON-RPC error response 00:20:43.565 response: 00:20:43.565 { 00:20:43.565 "code": -19, 00:20:43.565 "message": "No such device" 00:20:43.565 } 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.565 03:48:58 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.565 [2024-07-26 03:48:58.303948] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:20:43.565 [2024-07-26 03:48:58.311871] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:20:43.565 [2024-07-26 03:48:58.311917] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target 
has been destroyed 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.565 03:48:58 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.565 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.823 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.823 03:48:58 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:43.823 03:48:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:43.823 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.823 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.823 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.823 03:48:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:43.823 03:48:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:43.823 03:48:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:43.823 03:48:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:43.824 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.824 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.824 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.824 03:48:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:43.824 03:48:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:44.082 03:48:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:44.082 00:20:44.082 real 0m11.354s 00:20:44.082 user 0m0.733s 00:20:44.082 sys 0m0.856s 00:20:44.082 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:44.082 ************************************ 00:20:44.082 END TEST test_create_ublk 00:20:44.082 ************************************ 00:20:44.082 03:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.082 03:48:58 ublk -- common/autotest_common.sh@1142 -- # return 0 00:20:44.082 03:48:58 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:44.082 03:48:58 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:44.082 03:48:58 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.082 03:48:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.082 ************************************ 00:20:44.082 START TEST test_create_multi_ublk 00:20:44.082 ************************************ 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.082 [2024-07-26 03:48:58.803846] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:44.082 [2024-07-26 03:48:58.806129] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.082 03:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.340 [2024-07-26 03:48:59.048077] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:44.340 [2024-07-26 03:48:59.048596] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:44.340 [2024-07-26 03:48:59.048623] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:44.340 [2024-07-26 03:48:59.048634] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:44.340 [2024-07-26 03:48:59.063852] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:44.340 [2024-07-26 03:48:59.063884] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:44.340 [2024-07-26 03:48:59.077844] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:44.340 [2024-07-26 03:48:59.078597] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:44.340 [2024-07-26 03:48:59.090563] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.340 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.599 [2024-07-26 03:48:59.356015] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:44.599 [2024-07-26 03:48:59.356500] ublk.c:1931:ublk_start_disk: 
*INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:44.599 [2024-07-26 03:48:59.356525] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:44.599 [2024-07-26 03:48:59.356539] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:44.599 [2024-07-26 03:48:59.363879] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:44.599 [2024-07-26 03:48:59.363915] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:44.599 [2024-07-26 03:48:59.371860] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:44.599 [2024-07-26 03:48:59.372595] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:44.599 [2024-07-26 03:48:59.380950] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.599 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.857 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.857 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:44.857 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:44.857 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.857 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.857 [2024-07-26 03:48:59.636020] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:44.857 [2024-07-26 03:48:59.636488] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:44.857 [2024-07-26 03:48:59.636518] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:44.857 [2024-07-26 03:48:59.636529] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:44.857 [2024-07-26 03:48:59.643871] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:44.857 [2024-07-26 03:48:59.643901] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:44.857 [2024-07-26 03:48:59.651873] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:44.857 [2024-07-26 03:48:59.652600] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:44.858 [2024-07-26 03:48:59.664852] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:44.858 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.858 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:44.858 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:44.858 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 
128 4096 00:20:44.858 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.858 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:45.116 [2024-07-26 03:48:59.916996] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:45.116 [2024-07-26 03:48:59.917498] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:45.116 [2024-07-26 03:48:59.917522] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:45.116 [2024-07-26 03:48:59.917536] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:45.116 [2024-07-26 03:48:59.924874] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:45.116 [2024-07-26 03:48:59.924910] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:45.116 [2024-07-26 03:48:59.932864] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:45.116 [2024-07-26 03:48:59.933595] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:45.116 [2024-07-26 03:48:59.937494] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:45.116 { 00:20:45.116 "ublk_device": "/dev/ublkb0", 00:20:45.116 "id": 0, 00:20:45.116 "queue_depth": 512, 00:20:45.116 "num_queues": 4, 00:20:45.116 "bdev_name": "Malloc0" 00:20:45.116 }, 00:20:45.116 { 00:20:45.116 "ublk_device": "/dev/ublkb1", 00:20:45.116 "id": 1, 00:20:45.116 "queue_depth": 512, 00:20:45.116 "num_queues": 4, 00:20:45.116 "bdev_name": "Malloc1" 00:20:45.116 }, 00:20:45.116 { 00:20:45.116 "ublk_device": "/dev/ublkb2", 00:20:45.116 "id": 2, 00:20:45.116 "queue_depth": 512, 00:20:45.116 "num_queues": 4, 00:20:45.116 "bdev_name": "Malloc2" 00:20:45.116 }, 00:20:45.116 { 00:20:45.116 "ublk_device": "/dev/ublkb3", 00:20:45.116 "id": 3, 00:20:45.116 "queue_depth": 512, 00:20:45.116 "num_queues": 4, 00:20:45.116 "bdev_name": "Malloc3" 00:20:45.116 } 00:20:45.116 ]' 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:45.116 03:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # 
jq -r '.[0].ublk_device' 00:20:45.116 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:20:45.374 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:45.632 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:45.891 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 
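The four create/verify passes traced above all follow one pattern; a condensed sketch may help when skimming the log. This is illustrative only: it assumes scripts/rpc.py is invoked directly in place of the test's rpc_cmd wrapper, that ublk_create_target has already run, and that commands are issued from the SPDK repo root; sizes and queue settings are the ones printed in the trace.

    # One pass of the create-and-verify loop from test_create_multi_ublk (illustrative only).
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096     # size/block-size arguments as traced
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # 4 queues, queue depth 512
    done
    scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'         # expect /dev/ublkb1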
00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.149 03:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.149 [2024-07-26 03:49:00.990094] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:46.149 [2024-07-26 03:49:01.024404] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:46.149 [2024-07-26 03:49:01.026036] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:46.149 [2024-07-26 03:49:01.031886] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:46.149 [2024-07-26 03:49:01.032239] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:46.149 [2024-07-26 03:49:01.032260] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:46.149 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.149 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:46.149 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:46.149 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.149 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.149 [2024-07-26 03:49:01.047954] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:46.408 [2024-07-26 03:49:01.079865] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:46.408 [2024-07-26 03:49:01.081253] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:46.408 [2024-07-26 03:49:01.091913] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:46.408 [2024-07-26 03:49:01.092252] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:46.408 [2024-07-26 03:49:01.092271] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 
-- # set +x 00:20:46.408 [2024-07-26 03:49:01.105981] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:46.408 [2024-07-26 03:49:01.144903] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:46.408 [2024-07-26 03:49:01.146197] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:46.408 [2024-07-26 03:49:01.153845] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:46.408 [2024-07-26 03:49:01.154237] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:46.408 [2024-07-26 03:49:01.154260] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.408 [2024-07-26 03:49:01.162087] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:46.408 [2024-07-26 03:49:01.201894] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:46.408 [2024-07-26 03:49:01.206165] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:46.408 [2024-07-26 03:49:01.212849] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:46.408 [2024-07-26 03:49:01.213220] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:46.408 [2024-07-26 03:49:01.213242] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.408 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:46.666 [2024-07-26 03:49:01.471959] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:20:46.666 [2024-07-26 03:49:01.479844] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:20:46.666 [2024-07-26 03:49:01.479915] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:46.666 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:46.666 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:46.666 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:46.666 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.666 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.924 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.924 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:46.924 03:49:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:46.924 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.924 03:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:47.491 03:49:02 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.491 03:49:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:47.491 03:49:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:47.491 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.491 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:47.762 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.762 03:49:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:47.762 03:49:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:47.762 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.762 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:48.032 00:20:48.032 real 0m4.069s 00:20:48.032 user 0m1.294s 00:20:48.032 sys 0m0.166s 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:48.032 03:49:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:48.032 ************************************ 00:20:48.032 END TEST test_create_multi_ublk 00:20:48.032 ************************************ 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@1142 -- # return 0 00:20:48.032 03:49:02 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:48.032 03:49:02 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:48.032 03:49:02 ublk -- ublk/ublk.sh@130 -- # killprocess 78325 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@948 -- # '[' -z 78325 ']' 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@952 -- # kill -0 78325 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@953 -- # uname 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78325 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:48.032 killing process with pid 78325 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78325' 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@967 -- # kill 78325 00:20:48.032 03:49:02 ublk -- common/autotest_common.sh@972 -- # wait 78325 00:20:49.406 [2024-07-26 03:49:03.907363] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:20:49.407 [2024-07-26 03:49:03.907479] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:20:50.340 00:20:50.340 real 0m27.490s 00:20:50.340 user 0m42.027s 00:20:50.340 sys 0m7.619s 00:20:50.340 03:49:05 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.340 03:49:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:50.340 ************************************ 00:20:50.340 END TEST ublk 00:20:50.340 ************************************ 00:20:50.340 03:49:05 -- common/autotest_common.sh@1142 -- # return 0 00:20:50.340 03:49:05 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:50.340 03:49:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:50.340 03:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.340 03:49:05 -- common/autotest_common.sh@10 -- # set +x 00:20:50.340 ************************************ 00:20:50.340 START TEST ublk_recovery 00:20:50.340 ************************************ 00:20:50.340 03:49:05 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:50.340 * Looking for test storage... 
00:20:50.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:50.340 03:49:05 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:50.340 03:49:05 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:50.340 03:49:05 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:50.340 03:49:05 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78710 00:20:50.340 03:49:05 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.340 03:49:05 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78710 00:20:50.340 03:49:05 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:50.340 03:49:05 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78710 ']' 00:20:50.340 03:49:05 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.340 03:49:05 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.340 03:49:05 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.340 03:49:05 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.340 03:49:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.598 [2024-07-26 03:49:05.307948] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
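The target launch for the recovery test is the usual start-and-wait step; a minimal sketch, assuming the autotest_common.sh helpers are sourced and SPDK_BIN_DIR points at the built binaries as in the trace.

    # Start the target with the ublk log flag and block until its RPC socket
    # (/var/tmp/spdk.sock, per the "Waiting for process..." message above) answers.
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    spdk_pid=$!
    waitforlisten "$spdk_pid"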
00:20:50.598 [2024-07-26 03:49:05.308127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78710 ] 00:20:50.598 [2024-07-26 03:49:05.477409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:50.856 [2024-07-26 03:49:05.669365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.856 [2024-07-26 03:49:05.669369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.789 03:49:06 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.789 03:49:06 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:20:51.790 03:49:06 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.790 [2024-07-26 03:49:06.397901] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:51.790 [2024-07-26 03:49:06.400423] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.790 03:49:06 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.790 malloc0 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.790 03:49:06 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.790 [2024-07-26 03:49:06.538164] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:20:51.790 [2024-07-26 03:49:06.538394] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:51.790 [2024-07-26 03:49:06.538409] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:51.790 [2024-07-26 03:49:06.538422] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:51.790 [2024-07-26 03:49:06.548960] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:51.790 [2024-07-26 03:49:06.548998] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:51.790 [2024-07-26 03:49:06.556859] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:51.790 [2024-07-26 03:49:06.557053] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:51.790 [2024-07-26 03:49:06.566989] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:51.790 1 00:20:51.790 03:49:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.790 03:49:06 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:52.723 03:49:07 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78745 00:20:52.723 03:49:07 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test 
--filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:52.723 03:49:07 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:52.981 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.981 fio-3.35 00:20:52.981 Starting 1 process 00:20:58.246 03:49:12 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78710 00:20:58.246 03:49:12 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:21:03.512 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78710 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:21:03.512 03:49:17 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=78851 00:21:03.512 03:49:17 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.512 03:49:17 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 78851 00:21:03.512 03:49:17 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78851 ']' 00:21:03.512 03:49:17 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.512 03:49:17 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.512 03:49:17 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.512 03:49:17 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.512 03:49:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.512 03:49:17 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:03.512 [2024-07-26 03:49:17.710071] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
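From here the log shows the recovery path proper: the fio job above keeps running against /dev/ublkb1 while the old target (pid 78710) is killed, a fresh target comes up, and the disk is re-attached. A condensed sketch of that sequence, again with scripts/rpc.py standing in for rpc_cmd:

    kill -9 "$spdk_pid"                                    # simulate a target crash mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &              # new target (pid 78851 in this run)
    spdk_pid=$!
    waitforlisten "$spdk_pid"
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev
    scripts/rpc.py ublk_recover_disk malloc0 1             # re-attach the existing /dev/ublkb1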
00:21:03.512 [2024-07-26 03:49:17.710252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78851 ] 00:21:03.512 [2024-07-26 03:49:17.884061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:03.512 [2024-07-26 03:49:18.139537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.512 [2024-07-26 03:49:18.139548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:21:04.079 03:49:18 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.079 [2024-07-26 03:49:18.847839] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:04.079 [2024-07-26 03:49:18.850223] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.079 03:49:18 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.079 malloc0 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.079 03:49:18 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.079 03:49:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.079 [2024-07-26 03:49:18.976039] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:21:04.079 [2024-07-26 03:49:18.976097] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:04.079 [2024-07-26 03:49:18.976110] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:04.337 [2024-07-26 03:49:18.983893] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:04.337 [2024-07-26 03:49:18.983920] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:21:04.337 [2024-07-26 03:49:18.984019] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:21:04.337 1 00:21:04.337 03:49:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.337 03:49:18 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78745 00:21:04.337 [2024-07-26 03:49:18.991917] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:21:04.337 [2024-07-26 03:49:18.999022] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:21:04.337 [2024-07-26 03:49:19.007173] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:21:04.337 [2024-07-26 03:49:19.007209] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:22:00.584 00:22:00.584 
fio_test: (groupid=0, jobs=1): err= 0: pid=78748: Fri Jul 26 03:50:07 2024 00:22:00.584 read: IOPS=17.9k, BW=69.8MiB/s (73.2MB/s)(4189MiB/60004msec) 00:22:00.584 slat (nsec): min=1917, max=225364, avg=6480.08, stdev=2748.80 00:22:00.584 clat (usec): min=1220, max=6439.1k, avg=3510.52, stdev=48923.93 00:22:00.584 lat (usec): min=1225, max=6439.1k, avg=3517.00, stdev=48923.92 00:22:00.584 clat percentiles (usec): 00:22:00.584 | 1.00th=[ 2606], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:22:00.584 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:22:00.584 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 4047], 00:22:00.584 | 99.00th=[ 5669], 99.50th=[ 6325], 99.90th=[ 7701], 99.95th=[ 8979], 00:22:00.584 | 99.99th=[13698] 00:22:00.584 bw ( KiB/s): min=23352, max=82992, per=100.00%, avg=79528.86, stdev=7707.06, samples=107 00:22:00.584 iops : min= 5838, max=20748, avg=19882.21, stdev=1926.77, samples=107 00:22:00.584 write: IOPS=17.9k, BW=69.8MiB/s (73.2MB/s)(4187MiB/60004msec); 0 zone resets 00:22:00.584 slat (usec): min=2, max=248, avg= 6.52, stdev= 2.79 00:22:00.584 clat (usec): min=874, max=6439.4k, avg=3638.04, stdev=50491.50 00:22:00.584 lat (usec): min=893, max=6439.4k, avg=3644.56, stdev=50491.49 00:22:00.584 clat percentiles (usec): 00:22:00.584 | 1.00th=[ 2606], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3064], 00:22:00.584 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3130], 60.00th=[ 3163], 00:22:00.584 | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3425], 95.00th=[ 3949], 00:22:00.584 | 99.00th=[ 5669], 99.50th=[ 6456], 99.90th=[ 7832], 99.95th=[ 8979], 00:22:00.584 | 99.99th=[13960] 00:22:00.584 bw ( KiB/s): min=23744, max=83768, per=100.00%, avg=79459.55, stdev=7661.61, samples=107 00:22:00.584 iops : min= 5936, max=20942, avg=19864.91, stdev=1915.41, samples=107 00:22:00.584 lat (usec) : 1000=0.01% 00:22:00.584 lat (msec) : 2=0.06%, 4=94.95%, 10=4.95%, 20=0.04%, >=2000=0.01% 00:22:00.584 cpu : usr=9.83%, sys=21.82%, ctx=71618, majf=0, minf=13 00:22:00.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:00.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.584 issued rwts: total=1072506,1071878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.584 00:22:00.584 Run status group 0 (all jobs): 00:22:00.584 READ: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=4189MiB (4393MB), run=60004-60004msec 00:22:00.584 WRITE: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=4187MiB (4390MB), run=60004-60004msec 00:22:00.584 00:22:00.584 Disk stats (read/write): 00:22:00.584 ublkb1: ios=1070213/1069467, merge=0/0, ticks=3664935/3679068, in_queue=7344003, util=99.94% 00:22:00.584 03:50:07 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.584 [2024-07-26 03:50:07.836602] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:00.584 [2024-07-26 03:50:07.870978] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:00.584 [2024-07-26 03:50:07.871244] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 
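For reference, the 60-second job summarized above is the one launched before the kill: random read/write through libaio at queue depth 128, pinned to cores 2-3, and as the disk stats show it survives the target restart. Its invocation, as traced earlier in the log:

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60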
00:22:00.584 [2024-07-26 03:50:07.878881] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:00.584 [2024-07-26 03:50:07.879014] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:00.584 [2024-07-26 03:50:07.879033] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.584 03:50:07 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.584 [2024-07-26 03:50:07.894956] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:22:00.584 [2024-07-26 03:50:07.902841] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:22:00.584 [2024-07-26 03:50:07.902886] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.584 03:50:07 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:22:00.584 03:50:07 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:22:00.584 03:50:07 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 78851 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 78851 ']' 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 78851 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78851 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.584 killing process with pid 78851 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78851' 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@967 -- # kill 78851 00:22:00.584 03:50:07 ublk_recovery -- common/autotest_common.sh@972 -- # wait 78851 00:22:00.584 [2024-07-26 03:50:08.895463] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:22:00.584 [2024-07-26 03:50:08.895540] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:22:00.584 ************************************ 00:22:00.584 END TEST ublk_recovery 00:22:00.584 ************************************ 00:22:00.584 00:22:00.584 real 1m5.078s 00:22:00.584 user 1m48.084s 00:22:00.584 sys 0m29.991s 00:22:00.584 03:50:10 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.584 03:50:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.584 03:50:10 -- common/autotest_common.sh@1142 -- # return 0 00:22:00.584 03:50:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:00.584 03:50:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.584 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:22:00.584 03:50:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@312 -- # 
'[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:22:00.584 03:50:10 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:00.584 03:50:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:00.584 03:50:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.584 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:22:00.584 ************************************ 00:22:00.584 START TEST ftl 00:22:00.584 ************************************ 00:22:00.584 03:50:10 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:00.584 * Looking for test storage... 00:22:00.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:00.584 03:50:10 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:00.584 03:50:10 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:00.584 03:50:10 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:00.584 03:50:10 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:00.584 03:50:10 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:00.584 03:50:10 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:00.584 03:50:10 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.585 03:50:10 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:00.585 03:50:10 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:00.585 03:50:10 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.585 03:50:10 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.585 03:50:10 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:00.585 03:50:10 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:00.585 03:50:10 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:00.585 03:50:10 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:00.585 03:50:10 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:00.585 03:50:10 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:00.585 03:50:10 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.585 03:50:10 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.585 03:50:10 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:00.585 03:50:10 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:00.585 03:50:10 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:00.585 03:50:10 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:00.585 03:50:10 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:00.585 03:50:10 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:00.585 03:50:10 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:00.585 
03:50:10 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:00.585 03:50:10 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:00.585 03:50:10 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:00.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:00.585 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:00.585 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:00.585 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:00.585 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79628 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:22:00.585 03:50:10 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79628 00:22:00.585 03:50:10 ftl -- common/autotest_common.sh@829 -- # '[' -z 79628 ']' 00:22:00.585 03:50:10 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.585 03:50:10 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.585 03:50:10 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.585 03:50:10 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.585 03:50:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:00.585 [2024-07-26 03:50:10.993010] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
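Because ftl.sh starts the target with --wait-for-rpc, the framework is brought up by hand in the trace that follows; a condensed sketch of those RPC calls (paths relative to the SPDK repo root, flags exactly as traced, with process substitution producing the /dev/fd/62 path seen in the log):

    scripts/rpc.py bdev_set_options -d                               # ftl.sh@40, issued before framework init
    scripts/rpc.py framework_start_init                              # ftl.sh@41
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)   # ftl.sh@43
    # ftl.sh@47: find a bdev usable as the non-volatile cache (64-byte metadata, not zoned, >= 1310720 blocks).
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'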
00:22:00.585 [2024-07-26 03:50:10.993898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79628 ] 00:22:00.585 [2024-07-26 03:50:11.164660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.585 [2024-07-26 03:50:11.394422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.585 03:50:11 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.585 03:50:11 ftl -- common/autotest_common.sh@862 -- # return 0 00:22:00.585 03:50:11 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:22:00.585 03:50:12 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@50 -- # break 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:00.585 03:50:13 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:00.585 03:50:14 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:22:00.585 03:50:14 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:22:00.585 03:50:14 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:22:00.585 03:50:14 ftl -- ftl/ftl.sh@63 -- # break 00:22:00.585 03:50:14 ftl -- ftl/ftl.sh@66 -- # killprocess 79628 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@948 -- # '[' -z 79628 ']' 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@952 -- # kill -0 79628 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@953 -- # uname 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79628 00:22:00.585 killing process with pid 79628 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79628' 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@967 -- # kill 79628 00:22:00.585 03:50:14 ftl -- common/autotest_common.sh@972 -- # wait 79628 00:22:01.521 03:50:16 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:22:01.521 03:50:16 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:01.521 03:50:16 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:01.521 03:50:16 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.521 03:50:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 ************************************ 00:22:01.521 START TEST ftl_fio_basic 00:22:01.521 ************************************ 00:22:01.521 03:50:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:01.521 * Looking for test storage... 00:22:01.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.780 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:01.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79769 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79769 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 79769 ']' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.781 03:50:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:01.781 [2024-07-26 03:50:16.575351] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
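At this point fio.sh has selected the 'basic' suite (randw-verify, randw-verify-j2, randw-verify-depth128), exported FTL_BDEV_NAME=ftl0 and FTL_JSON_CONF pointing at test/ftl/config/ftl.json, and started a second spdk_tgt with -m 7 (mask 0b111, hence the three reactors on cores 0-2 reported next). The two exported variables are what the fio job files consume once the FTL bdev exists. A hedged sketch of that hookup follows; the plugin path and the spdk_bdev option names are the usual ones for SPDK's bdev fio plugin and are assumptions here, not something shown in this excerpt.

  # Sketch: pointing fio at the FTL bdev through SPDK's bdev ioengine.
  # Only FTL_BDEV_NAME and FTL_JSON_CONF come from the log above; the
  # plugin path, option names and job parameters may differ per version.
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev fio \
      --ioengine=spdk_bdev --spdk_json_conf="$FTL_JSON_CONF" \
      --filename="$FTL_BDEV_NAME" --thread=1 --direct=1 \
      --name=randw-verify --rw=randwrite --bs=4k --size=256M \
      --verify=crc32c

Each entry in the suite string names a fio job shipped with the test; the harness runs them one after another against the same ftl0 bdev.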
00:22:01.781 [2024-07-26 03:50:16.575527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79769 ] 00:22:02.040 [2024-07-26 03:50:16.751164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:02.298 [2024-07-26 03:50:16.990249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.298 [2024-07-26 03:50:16.990346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.298 [2024-07-26 03:50:16.990358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:22:02.874 03:50:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:22:03.457 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:03.715 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:03.715 { 00:22:03.715 "name": "nvme0n1", 00:22:03.715 "aliases": [ 00:22:03.715 "cce92dcd-58a5-403a-83f2-dfaae89de1e9" 00:22:03.715 ], 00:22:03.715 "product_name": "NVMe disk", 00:22:03.715 "block_size": 4096, 00:22:03.715 "num_blocks": 1310720, 00:22:03.715 "uuid": "cce92dcd-58a5-403a-83f2-dfaae89de1e9", 00:22:03.715 "assigned_rate_limits": { 00:22:03.715 "rw_ios_per_sec": 0, 00:22:03.715 "rw_mbytes_per_sec": 0, 00:22:03.715 "r_mbytes_per_sec": 0, 00:22:03.715 "w_mbytes_per_sec": 0 00:22:03.715 }, 00:22:03.715 "claimed": false, 00:22:03.715 "zoned": false, 00:22:03.715 "supported_io_types": { 00:22:03.715 "read": true, 00:22:03.716 "write": true, 00:22:03.716 "unmap": true, 00:22:03.716 "flush": true, 00:22:03.716 "reset": true, 00:22:03.716 "nvme_admin": true, 00:22:03.716 "nvme_io": true, 00:22:03.716 "nvme_io_md": false, 00:22:03.716 "write_zeroes": true, 00:22:03.716 "zcopy": false, 00:22:03.716 "get_zone_info": false, 00:22:03.716 "zone_management": false, 00:22:03.716 "zone_append": false, 00:22:03.716 "compare": true, 00:22:03.716 "compare_and_write": false, 00:22:03.716 "abort": true, 00:22:03.716 "seek_hole": false, 00:22:03.716 
"seek_data": false, 00:22:03.716 "copy": true, 00:22:03.716 "nvme_iov_md": false 00:22:03.716 }, 00:22:03.716 "driver_specific": { 00:22:03.716 "nvme": [ 00:22:03.716 { 00:22:03.716 "pci_address": "0000:00:11.0", 00:22:03.716 "trid": { 00:22:03.716 "trtype": "PCIe", 00:22:03.716 "traddr": "0000:00:11.0" 00:22:03.716 }, 00:22:03.716 "ctrlr_data": { 00:22:03.716 "cntlid": 0, 00:22:03.716 "vendor_id": "0x1b36", 00:22:03.716 "model_number": "QEMU NVMe Ctrl", 00:22:03.716 "serial_number": "12341", 00:22:03.716 "firmware_revision": "8.0.0", 00:22:03.716 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:03.716 "oacs": { 00:22:03.716 "security": 0, 00:22:03.716 "format": 1, 00:22:03.716 "firmware": 0, 00:22:03.716 "ns_manage": 1 00:22:03.716 }, 00:22:03.716 "multi_ctrlr": false, 00:22:03.716 "ana_reporting": false 00:22:03.716 }, 00:22:03.716 "vs": { 00:22:03.716 "nvme_version": "1.4" 00:22:03.716 }, 00:22:03.716 "ns_data": { 00:22:03.716 "id": 1, 00:22:03.716 "can_share": false 00:22:03.716 } 00:22:03.716 } 00:22:03.716 ], 00:22:03.716 "mp_policy": "active_passive" 00:22:03.716 } 00:22:03.716 } 00:22:03.716 ]' 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:03.716 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:03.974 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:22:03.974 03:50:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:04.233 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=03485813-7c97-471a-a5eb-cc265ee2a9d5 00:22:04.233 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 03485813-7c97-471a-a5eb-cc265ee2a9d5 00:22:04.492 03:50:19 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:04.492 03:50:19 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:04.492 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:22:04.492 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:04.492 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:04.492 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:22:04.493 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:04.493 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:04.493 03:50:19 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:04.493 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:22:04.493 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:22:04.493 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:04.751 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:04.751 { 00:22:04.751 "name": "b29aad83-e204-42b9-9d7c-f6bca0c21d34", 00:22:04.751 "aliases": [ 00:22:04.751 "lvs/nvme0n1p0" 00:22:04.751 ], 00:22:04.751 "product_name": "Logical Volume", 00:22:04.751 "block_size": 4096, 00:22:04.751 "num_blocks": 26476544, 00:22:04.751 "uuid": "b29aad83-e204-42b9-9d7c-f6bca0c21d34", 00:22:04.751 "assigned_rate_limits": { 00:22:04.751 "rw_ios_per_sec": 0, 00:22:04.751 "rw_mbytes_per_sec": 0, 00:22:04.751 "r_mbytes_per_sec": 0, 00:22:04.751 "w_mbytes_per_sec": 0 00:22:04.751 }, 00:22:04.751 "claimed": false, 00:22:04.751 "zoned": false, 00:22:04.751 "supported_io_types": { 00:22:04.751 "read": true, 00:22:04.751 "write": true, 00:22:04.751 "unmap": true, 00:22:04.751 "flush": false, 00:22:04.751 "reset": true, 00:22:04.751 "nvme_admin": false, 00:22:04.751 "nvme_io": false, 00:22:04.751 "nvme_io_md": false, 00:22:04.751 "write_zeroes": true, 00:22:04.751 "zcopy": false, 00:22:04.751 "get_zone_info": false, 00:22:04.751 "zone_management": false, 00:22:04.751 "zone_append": false, 00:22:04.751 "compare": false, 00:22:04.751 "compare_and_write": false, 00:22:04.751 "abort": false, 00:22:04.751 "seek_hole": true, 00:22:04.751 "seek_data": true, 00:22:04.751 "copy": false, 00:22:04.751 "nvme_iov_md": false 00:22:04.751 }, 00:22:04.751 "driver_specific": { 00:22:04.751 "lvol": { 00:22:04.751 "lvol_store_uuid": "03485813-7c97-471a-a5eb-cc265ee2a9d5", 00:22:04.751 "base_bdev": "nvme0n1", 00:22:04.751 "thin_provision": true, 00:22:04.751 "num_allocated_clusters": 0, 00:22:04.751 "snapshot": false, 00:22:04.751 "clone": false, 00:22:04.751 "esnap_clone": false 00:22:04.751 } 00:22:04.751 } 00:22:04.751 } 00:22:04.751 ]' 00:22:04.751 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:04.751 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:22:04.751 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:05.009 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:05.009 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:05.009 03:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:22:05.009 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:22:05.009 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:22:05.009 03:50:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:22:05.266 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:05.524 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:05.524 { 00:22:05.524 "name": "b29aad83-e204-42b9-9d7c-f6bca0c21d34", 00:22:05.524 "aliases": [ 00:22:05.524 "lvs/nvme0n1p0" 00:22:05.524 ], 00:22:05.524 "product_name": "Logical Volume", 00:22:05.524 "block_size": 4096, 00:22:05.524 "num_blocks": 26476544, 00:22:05.524 "uuid": "b29aad83-e204-42b9-9d7c-f6bca0c21d34", 00:22:05.524 "assigned_rate_limits": { 00:22:05.524 "rw_ios_per_sec": 0, 00:22:05.524 "rw_mbytes_per_sec": 0, 00:22:05.524 "r_mbytes_per_sec": 0, 00:22:05.524 "w_mbytes_per_sec": 0 00:22:05.524 }, 00:22:05.524 "claimed": false, 00:22:05.524 "zoned": false, 00:22:05.524 "supported_io_types": { 00:22:05.524 "read": true, 00:22:05.524 "write": true, 00:22:05.524 "unmap": true, 00:22:05.524 "flush": false, 00:22:05.524 "reset": true, 00:22:05.524 "nvme_admin": false, 00:22:05.524 "nvme_io": false, 00:22:05.524 "nvme_io_md": false, 00:22:05.524 "write_zeroes": true, 00:22:05.524 "zcopy": false, 00:22:05.524 "get_zone_info": false, 00:22:05.524 "zone_management": false, 00:22:05.524 "zone_append": false, 00:22:05.524 "compare": false, 00:22:05.524 "compare_and_write": false, 00:22:05.524 "abort": false, 00:22:05.524 "seek_hole": true, 00:22:05.524 "seek_data": true, 00:22:05.524 "copy": false, 00:22:05.524 "nvme_iov_md": false 00:22:05.524 }, 00:22:05.524 "driver_specific": { 00:22:05.524 "lvol": { 00:22:05.524 "lvol_store_uuid": "03485813-7c97-471a-a5eb-cc265ee2a9d5", 00:22:05.524 "base_bdev": "nvme0n1", 00:22:05.524 "thin_provision": true, 00:22:05.524 "num_allocated_clusters": 0, 00:22:05.524 "snapshot": false, 00:22:05.524 "clone": false, 00:22:05.524 "esnap_clone": false 00:22:05.524 } 00:22:05.524 } 00:22:05.524 } 00:22:05.524 ]' 00:22:05.524 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:05.524 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:22:05.524 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:05.782 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:05.782 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:05.782 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:22:05.782 03:50:20 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:22:05.782 03:50:20 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:22:06.040 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b29aad83-e204-42b9-9d7c-f6bca0c21d34 
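Between the repeated bdev dumps above, the harness finishes provisioning the FTL backing devices: the cache-side NVMe attached as nvc0 (0000:00:10.0) is carved into a 5171 MiB split (nvc0n1p0) with bdev_split_create, and l2p_percentage is set to 60. The "unary operator expected" message from fio.sh line 52 is bash complaining that the variable tested with -eq is unset or empty at that point (the trace shows the test collapse to '[' -eq 1 ']'); the test simply evaluates false and the run continues. Collected in one place, the provisioning chain visible so far looks like the sketch below; the commands are the ones in the trace, and the lvstore UUID is the one this run generated, so it would differ elsewhere.

  # Sketch: the RPC provisioning chain this log performs, in log order.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base (data) device and a thin-provisioned 103424 MiB logical volume on it.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 03485813-7c97-471a-a5eb-cc265ee2a9d5

  # Cache (write buffer) device and a 5171 MiB split used as the NV cache.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1      # -> nvc0n1p0

The bdev_ftl_create call that ties these pieces together follows in the next entries. As for the line-52 message, a quoted default such as [ "${var:-0}" -eq 1 ] is the usual way to keep that kind of test well-formed when the variable may be empty.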
00:22:06.040 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:22:06.040 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b29aad83-e204-42b9-9d7c-f6bca0c21d34 00:22:06.299 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:06.299 { 00:22:06.299 "name": "b29aad83-e204-42b9-9d7c-f6bca0c21d34", 00:22:06.299 "aliases": [ 00:22:06.299 "lvs/nvme0n1p0" 00:22:06.299 ], 00:22:06.299 "product_name": "Logical Volume", 00:22:06.299 "block_size": 4096, 00:22:06.299 "num_blocks": 26476544, 00:22:06.299 "uuid": "b29aad83-e204-42b9-9d7c-f6bca0c21d34", 00:22:06.299 "assigned_rate_limits": { 00:22:06.299 "rw_ios_per_sec": 0, 00:22:06.299 "rw_mbytes_per_sec": 0, 00:22:06.299 "r_mbytes_per_sec": 0, 00:22:06.299 "w_mbytes_per_sec": 0 00:22:06.299 }, 00:22:06.299 "claimed": false, 00:22:06.299 "zoned": false, 00:22:06.299 "supported_io_types": { 00:22:06.299 "read": true, 00:22:06.299 "write": true, 00:22:06.299 "unmap": true, 00:22:06.299 "flush": false, 00:22:06.299 "reset": true, 00:22:06.299 "nvme_admin": false, 00:22:06.299 "nvme_io": false, 00:22:06.299 "nvme_io_md": false, 00:22:06.299 "write_zeroes": true, 00:22:06.299 "zcopy": false, 00:22:06.299 "get_zone_info": false, 00:22:06.299 "zone_management": false, 00:22:06.299 "zone_append": false, 00:22:06.299 "compare": false, 00:22:06.299 "compare_and_write": false, 00:22:06.299 "abort": false, 00:22:06.299 "seek_hole": true, 00:22:06.299 "seek_data": true, 00:22:06.299 "copy": false, 00:22:06.299 "nvme_iov_md": false 00:22:06.299 }, 00:22:06.299 "driver_specific": { 00:22:06.299 "lvol": { 00:22:06.299 "lvol_store_uuid": "03485813-7c97-471a-a5eb-cc265ee2a9d5", 00:22:06.299 "base_bdev": "nvme0n1", 00:22:06.299 "thin_provision": true, 00:22:06.299 "num_allocated_clusters": 0, 00:22:06.300 "snapshot": false, 00:22:06.300 "clone": false, 00:22:06.300 "esnap_clone": false 00:22:06.300 } 00:22:06.300 } 00:22:06.300 } 00:22:06.300 ]' 00:22:06.300 03:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:22:06.300 03:50:21 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b29aad83-e204-42b9-9d7c-f6bca0c21d34 -c nvc0n1p0 --l2p_dram_limit 60 00:22:06.559 [2024-07-26 03:50:21.370998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.371636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.559 [2024-07-26 03:50:21.371678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:06.559 [2024-07-26 03:50:21.371696] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.371842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.371872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.559 [2024-07-26 03:50:21.371887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:06.559 [2024-07-26 03:50:21.371901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.371944] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.559 [2024-07-26 03:50:21.372927] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.559 [2024-07-26 03:50:21.372962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.372981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.559 [2024-07-26 03:50:21.372996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:22:06.559 [2024-07-26 03:50:21.373010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.373145] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 338eb1a1-39b2-4ca2-be3f-5149e9b61fc2 00:22:06.559 [2024-07-26 03:50:21.374249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.374291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:06.559 [2024-07-26 03:50:21.374316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:06.559 [2024-07-26 03:50:21.374331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.378925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.378976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.559 [2024-07-26 03:50:21.379004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.484 ms 00:22:06.559 [2024-07-26 03:50:21.379018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.379156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.379179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.559 [2024-07-26 03:50:21.379195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:06.559 [2024-07-26 03:50:21.379208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.379312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.379331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.559 [2024-07-26 03:50:21.379347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:06.559 [2024-07-26 03:50:21.379362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.379407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.559 [2024-07-26 03:50:21.383977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.384029] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.559 [2024-07-26 03:50:21.384047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:22:06.559 [2024-07-26 03:50:21.384061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.384115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.384134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.559 [2024-07-26 03:50:21.384148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:06.559 [2024-07-26 03:50:21.384162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.384221] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:06.559 [2024-07-26 03:50:21.384409] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.559 [2024-07-26 03:50:21.384432] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.559 [2024-07-26 03:50:21.384454] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:06.559 [2024-07-26 03:50:21.384471] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.559 [2024-07-26 03:50:21.384490] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.559 [2024-07-26 03:50:21.384503] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:06.559 [2024-07-26 03:50:21.384516] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.559 [2024-07-26 03:50:21.384531] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.559 [2024-07-26 03:50:21.384544] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.559 [2024-07-26 03:50:21.384557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.384571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.559 [2024-07-26 03:50:21.384585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:22:06.559 [2024-07-26 03:50:21.384598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.384718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.559 [2024-07-26 03:50:21.384738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.559 [2024-07-26 03:50:21.384751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:06.559 [2024-07-26 03:50:21.384765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.559 [2024-07-26 03:50:21.384925] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.559 [2024-07-26 03:50:21.384952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.559 [2024-07-26 03:50:21.384966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.559 [2024-07-26 03:50:21.384981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.559 [2024-07-26 03:50:21.384993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.559 [2024-07-26 
03:50:21.385006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.559 [2024-07-26 03:50:21.385018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:06.559 [2024-07-26 03:50:21.385031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.559 [2024-07-26 03:50:21.385043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:06.559 [2024-07-26 03:50:21.385059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.559 [2024-07-26 03:50:21.385070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.559 [2024-07-26 03:50:21.385094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:06.559 [2024-07-26 03:50:21.385105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.559 [2024-07-26 03:50:21.385119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.559 [2024-07-26 03:50:21.385130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:06.559 [2024-07-26 03:50:21.385142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.559 [2024-07-26 03:50:21.385153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.559 [2024-07-26 03:50:21.385168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:06.559 [2024-07-26 03:50:21.385179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.559 [2024-07-26 03:50:21.385192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.559 [2024-07-26 03:50:21.385204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:06.559 [2024-07-26 03:50:21.385216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.559 [2024-07-26 03:50:21.385227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.559 [2024-07-26 03:50:21.385240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:06.559 [2024-07-26 03:50:21.385251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.559 [2024-07-26 03:50:21.385264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.559 [2024-07-26 03:50:21.385275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:06.559 [2024-07-26 03:50:21.385288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.560 [2024-07-26 03:50:21.385299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.560 [2024-07-26 03:50:21.385312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:06.560 [2024-07-26 03:50:21.385323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.560 [2024-07-26 03:50:21.385336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.560 [2024-07-26 03:50:21.385348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:06.560 [2024-07-26 03:50:21.385363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.560 [2024-07-26 03:50:21.385374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.560 [2024-07-26 03:50:21.385387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:06.560 [2024-07-26 03:50:21.385398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:22:06.560 [2024-07-26 03:50:21.385412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.560 [2024-07-26 03:50:21.385423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:06.560 [2024-07-26 03:50:21.385436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.560 [2024-07-26 03:50:21.385447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.560 [2024-07-26 03:50:21.385462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:06.560 [2024-07-26 03:50:21.385473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.560 [2024-07-26 03:50:21.385485] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.560 [2024-07-26 03:50:21.385498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.560 [2024-07-26 03:50:21.385535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.560 [2024-07-26 03:50:21.385549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.560 [2024-07-26 03:50:21.385563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.560 [2024-07-26 03:50:21.385575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.560 [2024-07-26 03:50:21.385590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.560 [2024-07-26 03:50:21.385602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.560 [2024-07-26 03:50:21.385615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.560 [2024-07-26 03:50:21.385626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.560 [2024-07-26 03:50:21.385652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.560 [2024-07-26 03:50:21.385669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.560 [2024-07-26 03:50:21.385696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:06.560 [2024-07-26 03:50:21.385711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:06.560 [2024-07-26 03:50:21.385728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:06.560 [2024-07-26 03:50:21.385742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:06.560 [2024-07-26 03:50:21.385761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:06.560 [2024-07-26 03:50:21.385774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:06.560 [2024-07-26 03:50:21.385792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:06.560 [2024-07-26 03:50:21.385826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:06.560 [2024-07-26 
03:50:21.385853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:06.560 [2024-07-26 03:50:21.385866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:06.560 [2024-07-26 03:50:21.385882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:06.560 [2024-07-26 03:50:21.385895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:06.560 [2024-07-26 03:50:21.385909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:06.560 [2024-07-26 03:50:21.385922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:06.560 [2024-07-26 03:50:21.385936] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.560 [2024-07-26 03:50:21.385950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.560 [2024-07-26 03:50:21.385965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.560 [2024-07-26 03:50:21.385978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.560 [2024-07-26 03:50:21.385994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.560 [2024-07-26 03:50:21.386007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.560 [2024-07-26 03:50:21.386023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.560 [2024-07-26 03:50:21.386036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.560 [2024-07-26 03:50:21.386051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:22:06.560 [2024-07-26 03:50:21.386063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.560 [2024-07-26 03:50:21.386140] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
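The layout dump above also makes the sizing easy to verify from numbers in this log alone: the base device exposes 103424 MiB and the NV cache 5171 MiB, and the L2P table holds 20,971,520 entries at 4 bytes each, which is exactly the 80.00 MiB reported for the l2p region. The --l2p_dram_limit 60 passed to bdev_ftl_create earlier caps how much of that table FTL keeps resident in DRAM, which is why a maximum resident size of 59 (of 60) MiB is reported further down. A quick arithmetic check:

  # Check the L2P region size reported in the layout dump above.
  l2p_entries=20971520          # "L2P entries: 20971520"
  l2p_addr_size=4               # "L2P address size: 4" (bytes per entry)
  echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # prints 80 (MiB)

  # Each entry maps one 4 KiB block, so the addressable user space is:
  echo $(( l2p_entries * 4096 / 1024 / 1024 / 1024 ))     # prints 80 (GiB)

That 80 GiB figure matches the 20971520-block ftl0 bdev that shows up once startup completes.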
00:22:06.560 [2024-07-26 03:50:21.386158] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:09.842 [2024-07-26 03:50:24.303549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.303619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:09.842 [2024-07-26 03:50:24.303645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2917.422 ms 00:22:09.842 [2024-07-26 03:50:24.303659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.336413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.336483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.842 [2024-07-26 03:50:24.336509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.451 ms 00:22:09.842 [2024-07-26 03:50:24.336523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.336713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.336733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:09.842 [2024-07-26 03:50:24.336749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:09.842 [2024-07-26 03:50:24.336764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.385917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.385983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.842 [2024-07-26 03:50:24.386008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.036 ms 00:22:09.842 [2024-07-26 03:50:24.386022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.386093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.386110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.842 [2024-07-26 03:50:24.386126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:09.842 [2024-07-26 03:50:24.386138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.386590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.386615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.842 [2024-07-26 03:50:24.386632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:22:09.842 [2024-07-26 03:50:24.386644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.386811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.386855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.842 [2024-07-26 03:50:24.386873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:22:09.842 [2024-07-26 03:50:24.386885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.408874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.408942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.842 [2024-07-26 
03:50:24.408967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.927 ms 00:22:09.842 [2024-07-26 03:50:24.408980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.422456] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:09.842 [2024-07-26 03:50:24.436461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.436547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:09.842 [2024-07-26 03:50:24.436571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.322 ms 00:22:09.842 [2024-07-26 03:50:24.436587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.498745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.498849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:09.842 [2024-07-26 03:50:24.498882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.077 ms 00:22:09.842 [2024-07-26 03:50:24.498900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.499159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.499190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:09.842 [2024-07-26 03:50:24.499206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:22:09.842 [2024-07-26 03:50:24.499223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.530779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.530854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:09.842 [2024-07-26 03:50:24.530877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.479 ms 00:22:09.842 [2024-07-26 03:50:24.530901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.561619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.561672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:09.842 [2024-07-26 03:50:24.561693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.658 ms 00:22:09.842 [2024-07-26 03:50:24.561708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.562470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.562500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:09.842 [2024-07-26 03:50:24.562516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:22:09.842 [2024-07-26 03:50:24.562531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.842 [2024-07-26 03:50:24.663233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.842 [2024-07-26 03:50:24.663304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:09.842 [2024-07-26 03:50:24.663327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.614 ms 00:22:09.843 [2024-07-26 03:50:24.663345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.843 [2024-07-26 
03:50:24.695748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.843 [2024-07-26 03:50:24.695803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:09.843 [2024-07-26 03:50:24.695835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.339 ms 00:22:09.843 [2024-07-26 03:50:24.695852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.843 [2024-07-26 03:50:24.727123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.843 [2024-07-26 03:50:24.727176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:09.843 [2024-07-26 03:50:24.727197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.206 ms 00:22:09.843 [2024-07-26 03:50:24.727212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.102 [2024-07-26 03:50:24.758764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.102 [2024-07-26 03:50:24.758834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:10.102 [2024-07-26 03:50:24.758857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.494 ms 00:22:10.102 [2024-07-26 03:50:24.758873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.102 [2024-07-26 03:50:24.758946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.102 [2024-07-26 03:50:24.758968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:10.102 [2024-07-26 03:50:24.758983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:10.102 [2024-07-26 03:50:24.759000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.102 [2024-07-26 03:50:24.759139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.102 [2024-07-26 03:50:24.759163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:10.102 [2024-07-26 03:50:24.759178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:10.102 [2024-07-26 03:50:24.759192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.102 [2024-07-26 03:50:24.760296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3388.812 ms, result 0 00:22:10.102 { 00:22:10.102 "name": "ftl0", 00:22:10.102 "uuid": "338eb1a1-39b2-4ca2-be3f-5149e9b61fc2" 00:22:10.102 } 00:22:10.102 03:50:24 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:22:10.102 03:50:24 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:22:10.102 03:50:24 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:10.102 03:50:24 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:22:10.102 03:50:24 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:10.102 03:50:24 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:10.102 03:50:24 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:10.360 03:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:10.619 [ 00:22:10.619 { 00:22:10.619 "name": "ftl0", 00:22:10.619 "aliases": [ 00:22:10.619 "338eb1a1-39b2-4ca2-be3f-5149e9b61fc2" 00:22:10.619 ], 00:22:10.619 "product_name": "FTL 
disk", 00:22:10.619 "block_size": 4096, 00:22:10.619 "num_blocks": 20971520, 00:22:10.619 "uuid": "338eb1a1-39b2-4ca2-be3f-5149e9b61fc2", 00:22:10.619 "assigned_rate_limits": { 00:22:10.619 "rw_ios_per_sec": 0, 00:22:10.619 "rw_mbytes_per_sec": 0, 00:22:10.619 "r_mbytes_per_sec": 0, 00:22:10.619 "w_mbytes_per_sec": 0 00:22:10.619 }, 00:22:10.619 "claimed": false, 00:22:10.619 "zoned": false, 00:22:10.619 "supported_io_types": { 00:22:10.619 "read": true, 00:22:10.619 "write": true, 00:22:10.619 "unmap": true, 00:22:10.619 "flush": true, 00:22:10.619 "reset": false, 00:22:10.619 "nvme_admin": false, 00:22:10.619 "nvme_io": false, 00:22:10.619 "nvme_io_md": false, 00:22:10.619 "write_zeroes": true, 00:22:10.619 "zcopy": false, 00:22:10.619 "get_zone_info": false, 00:22:10.619 "zone_management": false, 00:22:10.619 "zone_append": false, 00:22:10.619 "compare": false, 00:22:10.619 "compare_and_write": false, 00:22:10.619 "abort": false, 00:22:10.619 "seek_hole": false, 00:22:10.619 "seek_data": false, 00:22:10.619 "copy": false, 00:22:10.619 "nvme_iov_md": false 00:22:10.619 }, 00:22:10.619 "driver_specific": { 00:22:10.619 "ftl": { 00:22:10.619 "base_bdev": "b29aad83-e204-42b9-9d7c-f6bca0c21d34", 00:22:10.619 "cache": "nvc0n1p0" 00:22:10.619 } 00:22:10.619 } 00:22:10.619 } 00:22:10.619 ] 00:22:10.619 03:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:22:10.619 03:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:22:10.619 03:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:10.877 03:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:22:10.877 03:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:11.136 [2024-07-26 03:50:25.889522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.889585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.136 [2024-07-26 03:50:25.889616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:11.136 [2024-07-26 03:50:25.889629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.889679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.136 [2024-07-26 03:50:25.893053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.893094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.136 [2024-07-26 03:50:25.893112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms 00:22:11.136 [2024-07-26 03:50:25.893127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.893634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.893676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.136 [2024-07-26 03:50:25.893693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:22:11.136 [2024-07-26 03:50:25.893712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.897044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.897082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.136 
[2024-07-26 03:50:25.897099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:22:11.136 [2024-07-26 03:50:25.897114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.903808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.903856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:11.136 [2024-07-26 03:50:25.903874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.660 ms 00:22:11.136 [2024-07-26 03:50:25.903895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.935058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.935114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:11.136 [2024-07-26 03:50:25.935135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.047 ms 00:22:11.136 [2024-07-26 03:50:25.935150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.953707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.953771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:11.136 [2024-07-26 03:50:25.953792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.489 ms 00:22:11.136 [2024-07-26 03:50:25.953807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.954078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.954105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:11.136 [2024-07-26 03:50:25.954120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:22:11.136 [2024-07-26 03:50:25.954134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:25.985179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:25.985235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:11.136 [2024-07-26 03:50:25.985256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.009 ms 00:22:11.136 [2024-07-26 03:50:25.985271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.136 [2024-07-26 03:50:26.016023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.136 [2024-07-26 03:50:26.016078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:11.136 [2024-07-26 03:50:26.016099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.693 ms 00:22:11.136 [2024-07-26 03:50:26.016114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.396 [2024-07-26 03:50:26.046638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.396 [2024-07-26 03:50:26.046721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:11.396 [2024-07-26 03:50:26.046743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.468 ms 00:22:11.396 [2024-07-26 03:50:26.046758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.396 [2024-07-26 03:50:26.077630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.396 [2024-07-26 03:50:26.077698] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:11.396 [2024-07-26 03:50:26.077719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.698 ms 00:22:11.396 [2024-07-26 03:50:26.077735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.396 [2024-07-26 03:50:26.077796] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:11.396 [2024-07-26 03:50:26.077844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.077999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 
[2024-07-26 03:50:26.078165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:11.396 [2024-07-26 03:50:26.078515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:11.396 [2024-07-26 03:50:26.078751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.078999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:11.397 [2024-07-26 03:50:26.079331] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:11.397 [2024-07-26 03:50:26.079344] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 338eb1a1-39b2-4ca2-be3f-5149e9b61fc2 00:22:11.397 [2024-07-26 03:50:26.079358] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:11.397 [2024-07-26 03:50:26.079373] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:11.397 [2024-07-26 03:50:26.079389] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:11.397 [2024-07-26 03:50:26.079401] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:11.397 [2024-07-26 03:50:26.079414] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:11.397 [2024-07-26 03:50:26.079427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:11.397 [2024-07-26 03:50:26.079441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:11.397 [2024-07-26 03:50:26.079452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:11.397 [2024-07-26 03:50:26.079464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:11.397 [2024-07-26 03:50:26.079477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.397 [2024-07-26 03:50:26.079491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:11.397 [2024-07-26 03:50:26.079505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:22:11.397 [2024-07-26 03:50:26.079519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.397 [2024-07-26 03:50:26.096252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.397 [2024-07-26 03:50:26.096305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:11.397 [2024-07-26 03:50:26.096324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.653 ms 00:22:11.397 [2024-07-26 03:50:26.096339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.397 [2024-07-26 03:50:26.096794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.397 [2024-07-26 03:50:26.096841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:11.397 [2024-07-26 03:50:26.096861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:22:11.397 [2024-07-26 03:50:26.096875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.397 [2024-07-26 03:50:26.154899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.397 [2024-07-26 03:50:26.154973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.397 [2024-07-26 03:50:26.154994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.397 [2024-07-26 03:50:26.155009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:11.397 [2024-07-26 03:50:26.155095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.397 [2024-07-26 03:50:26.155114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.397 [2024-07-26 03:50:26.155128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.397 [2024-07-26 03:50:26.155142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.397 [2024-07-26 03:50:26.155310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.397 [2024-07-26 03:50:26.155337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.397 [2024-07-26 03:50:26.155352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.397 [2024-07-26 03:50:26.155365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.397 [2024-07-26 03:50:26.155399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.397 [2024-07-26 03:50:26.155419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.397 [2024-07-26 03:50:26.155432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.397 [2024-07-26 03:50:26.155447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.397 [2024-07-26 03:50:26.260231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.397 [2024-07-26 03:50:26.260312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.397 [2024-07-26 03:50:26.260334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.397 [2024-07-26 03:50:26.260355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.344510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.656 [2024-07-26 03:50:26.344573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.656 [2024-07-26 03:50:26.344595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.656 [2024-07-26 03:50:26.344609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.344753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.656 [2024-07-26 03:50:26.344779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.656 [2024-07-26 03:50:26.344793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.656 [2024-07-26 03:50:26.344807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.344915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.656 [2024-07-26 03:50:26.344941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.656 [2024-07-26 03:50:26.344955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.656 [2024-07-26 03:50:26.344969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.345111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.656 [2024-07-26 03:50:26.345146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.656 [2024-07-26 03:50:26.345161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.656 [2024-07-26 
03:50:26.345176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.345251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.656 [2024-07-26 03:50:26.345276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:11.656 [2024-07-26 03:50:26.345289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.656 [2024-07-26 03:50:26.345303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.345359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.656 [2024-07-26 03:50:26.345377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.656 [2024-07-26 03:50:26.345392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.656 [2024-07-26 03:50:26.345406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.345470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.656 [2024-07-26 03:50:26.345494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.656 [2024-07-26 03:50:26.345507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.656 [2024-07-26 03:50:26.345520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.656 [2024-07-26 03:50:26.345709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.168 ms, result 0 00:22:11.656 true 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79769 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 79769 ']' 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 79769 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79769 00:22:11.656 killing process with pid 79769 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79769' 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 79769 00:22:11.656 03:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 79769 00:22:16.925 03:50:30 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:16.925 03:50:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:16.925 03:50:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:16.925 03:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.926 03:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:16.926 03:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:16.926 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:16.926 fio-3.35 00:22:16.926 Starting 1 thread 00:22:22.189 00:22:22.189 test: (groupid=0, jobs=1): err= 0: pid=79979: Fri Jul 26 03:50:36 2024 00:22:22.189 read: IOPS=1045, BW=69.4MiB/s (72.8MB/s)(255MiB/3666msec) 00:22:22.189 slat (nsec): min=6053, max=39480, avg=8006.62, stdev=3148.20 00:22:22.189 clat (usec): min=272, max=955, avg=425.18, stdev=53.08 00:22:22.189 lat (usec): min=279, max=962, avg=433.19, stdev=53.96 00:22:22.189 clat percentiles (usec): 00:22:22.189 | 1.00th=[ 338], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:22:22.189 | 30.00th=[ 383], 40.00th=[ 400], 50.00th=[ 433], 60.00th=[ 441], 00:22:22.189 | 70.00th=[ 445], 80.00th=[ 457], 90.00th=[ 506], 95.00th=[ 523], 00:22:22.189 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 644], 99.95th=[ 693], 00:22:22.189 | 99.99th=[ 955] 00:22:22.189 write: IOPS=1052, BW=69.9MiB/s (73.3MB/s)(256MiB/3662msec); 0 zone resets 00:22:22.189 slat (nsec): min=20854, max=90701, avg=25285.47, stdev=5122.20 00:22:22.189 clat (usec): min=318, max=899, avg=480.40, stdev=61.29 00:22:22.189 lat (usec): min=360, max=928, avg=505.69, stdev=61.77 00:22:22.189 clat percentiles (usec): 00:22:22.189 | 1.00th=[ 379], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 429], 00:22:22.189 | 30.00th=[ 461], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:22:22.189 | 70.00th=[ 498], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 570], 00:22:22.189 | 99.00th=[ 717], 99.50th=[ 758], 99.90th=[ 816], 99.95th=[ 873], 00:22:22.189 | 99.99th=[ 898] 00:22:22.189 bw ( KiB/s): min=67728, max=77520, per=99.74%, avg=71419.43, stdev=3103.69, samples=7 00:22:22.189 iops : min= 996, max= 1140, avg=1050.29, stdev=45.64, samples=7 00:22:22.189 lat (usec) : 500=79.59%, 750=20.09%, 1000=0.31% 00:22:22.189 cpu 
: usr=99.15%, sys=0.08%, ctx=7, majf=0, minf=1171 00:22:22.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.189 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:22.189 00:22:22.189 Run status group 0 (all jobs): 00:22:22.189 READ: bw=69.4MiB/s (72.8MB/s), 69.4MiB/s-69.4MiB/s (72.8MB/s-72.8MB/s), io=255MiB (267MB), run=3666-3666msec 00:22:22.189 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=256MiB (269MB), run=3662-3662msec 00:22:23.119 ----------------------------------------------------- 00:22:23.119 Suppressions used: 00:22:23.119 count bytes template 00:22:23.119 1 5 /usr/src/fio/parse.c 00:22:23.119 1 8 libtcmalloc_minimal.so 00:22:23.119 1 904 libcrypto.so 00:22:23.119 ----------------------------------------------------- 00:22:23.119 00:22:23.119 03:50:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:23.119 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.119 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:23.119 03:50:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:23.119 03:50:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:23.119 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.119 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:23.120 03:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:22:23.120 03:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:23.120 03:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:23.120 03:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:22:23.120 03:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:23.120 03:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:23.378 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:23.378 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:23.378 fio-3.35 00:22:23.378 Starting 2 threads 00:22:55.530 00:22:55.530 first_half: (groupid=0, jobs=1): err= 0: pid=80082: Fri Jul 26 03:51:10 2024 00:22:55.530 read: IOPS=2109, BW=8438KiB/s (8641kB/s)(255MiB/30927msec) 00:22:55.530 slat (nsec): min=4791, max=42521, avg=7679.88, stdev=1803.20 00:22:55.530 clat (usec): min=1053, max=353718, avg=45035.11, stdev=24557.74 00:22:55.530 lat (usec): min=1060, max=353725, avg=45042.79, stdev=24557.96 00:22:55.530 clat percentiles (msec): 00:22:55.530 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 40], 00:22:55.530 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:22:55.530 | 70.00th=[ 44], 80.00th=[ 46], 90.00th=[ 51], 95.00th=[ 58], 00:22:55.530 | 99.00th=[ 186], 99.50th=[ 209], 99.90th=[ 279], 99.95th=[ 305], 00:22:55.530 | 99.99th=[ 338] 00:22:55.530 write: IOPS=2474, BW=9896KiB/s (10.1MB/s)(256MiB/26489msec); 0 zone resets 00:22:55.530 slat (usec): min=6, max=216, avg=10.03, stdev= 4.87 00:22:55.530 clat (usec): min=445, max=128284, avg=15514.75, stdev=26385.24 00:22:55.530 lat (usec): min=452, max=128316, avg=15524.78, stdev=26385.54 00:22:55.530 clat percentiles (usec): 00:22:55.530 | 1.00th=[ 1020], 5.00th=[ 1303], 10.00th=[ 1549], 20.00th=[ 2040], 00:22:55.530 | 30.00th=[ 3916], 40.00th=[ 5735], 50.00th=[ 7308], 60.00th=[ 8291], 00:22:55.530 | 70.00th=[ 9765], 80.00th=[ 14746], 90.00th=[ 43254], 95.00th=[ 98042], 00:22:55.530 | 99.00th=[113771], 99.50th=[116917], 99.90th=[123208], 99.95th=[126354], 00:22:55.530 | 99.99th=[127402] 00:22:55.530 bw ( KiB/s): min= 928, max=39840, per=100.00%, avg=20979.56, stdev=11006.90, samples=25 00:22:55.530 iops : min= 232, max= 9960, avg=5244.92, stdev=2751.83, samples=25 00:22:55.530 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.35% 00:22:55.530 lat (msec) : 2=9.34%, 4=5.71%, 10=20.40%, 20=10.51%, 50=44.48% 00:22:55.530 lat (msec) : 100=5.55%, 250=3.52%, 500=0.08% 00:22:55.530 cpu : usr=99.11%, sys=0.15%, ctx=61, majf=0, minf=5590 00:22:55.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:55.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.530 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:55.530 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:55.530 second_half: (groupid=0, jobs=1): err= 0: pid=80083: Fri Jul 26 03:51:10 2024 00:22:55.530 read: IOPS=2125, BW=8501KiB/s (8705kB/s)(255MiB/30671msec) 00:22:55.530 slat (nsec): min=4801, max=51735, avg=7638.21, stdev=1898.33 00:22:55.530 clat (usec): min=1069, max=357856, avg=46256.43, stdev=23670.33 00:22:55.530 lat (usec): min=1078, max=357868, avg=46264.07, stdev=23670.50 00:22:55.530 clat percentiles (msec): 00:22:55.530 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:22:55.530 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:22:55.530 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 52], 95.00th=[ 63], 00:22:55.530 | 99.00th=[ 176], 
99.50th=[ 207], 99.90th=[ 243], 99.95th=[ 275], 00:22:55.530 | 99.99th=[ 334] 00:22:55.530 write: IOPS=2742, BW=10.7MiB/s (11.2MB/s)(256MiB/23900msec); 0 zone resets 00:22:55.530 slat (usec): min=6, max=385, avg= 9.69, stdev= 5.22 00:22:55.530 clat (usec): min=498, max=127921, avg=13867.28, stdev=25688.67 00:22:55.530 lat (usec): min=512, max=127928, avg=13876.97, stdev=25688.79 00:22:55.530 clat percentiles (usec): 00:22:55.530 | 1.00th=[ 1106], 5.00th=[ 1401], 10.00th=[ 1598], 20.00th=[ 1926], 00:22:55.530 | 30.00th=[ 2474], 40.00th=[ 4015], 50.00th=[ 5342], 60.00th=[ 6718], 00:22:55.530 | 70.00th=[ 8291], 80.00th=[ 14877], 90.00th=[ 19268], 95.00th=[ 96994], 00:22:55.530 | 99.00th=[111674], 99.50th=[115868], 99.90th=[124257], 99.95th=[126354], 00:22:55.530 | 99.99th=[127402] 00:22:55.530 bw ( KiB/s): min= 832, max=44912, per=100.00%, avg=20172.12, stdev=11783.82, samples=26 00:22:55.530 iops : min= 208, max=11228, avg=5043.00, stdev=2945.99, samples=26 00:22:55.530 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.15% 00:22:55.530 lat (msec) : 2=10.77%, 4=9.31%, 10=16.97%, 20=8.86%, 50=44.16% 00:22:55.530 lat (msec) : 100=6.15%, 250=3.57%, 500=0.04% 00:22:55.530 cpu : usr=99.10%, sys=0.14%, ctx=43, majf=0, minf=5527 00:22:55.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:55.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.530 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:55.530 issued rwts: total=65184,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:55.530 00:22:55.530 Run status group 0 (all jobs): 00:22:55.530 READ: bw=16.5MiB/s (17.3MB/s), 8438KiB/s-8501KiB/s (8641kB/s-8705kB/s), io=509MiB (534MB), run=30671-30927msec 00:22:55.530 WRITE: bw=19.3MiB/s (20.3MB/s), 9896KiB/s-10.7MiB/s (10.1MB/s-11.2MB/s), io=512MiB (537MB), run=23900-26489msec 00:22:58.060 ----------------------------------------------------- 00:22:58.060 Suppressions used: 00:22:58.060 count bytes template 00:22:58.060 2 10 /usr/src/fio/parse.c 00:22:58.060 2 192 /usr/src/fio/iolog.c 00:22:58.060 1 8 libtcmalloc_minimal.so 00:22:58.060 1 904 libcrypto.so 00:22:58.060 ----------------------------------------------------- 00:22:58.060 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:58.060 03:51:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:58.060 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:58.061 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:58.061 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:22:58.061 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:58.061 03:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:58.061 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:58.061 fio-3.35 00:22:58.061 Starting 1 thread 00:23:16.139 00:23:16.139 test: (groupid=0, jobs=1): err= 0: pid=80460: Fri Jul 26 03:51:30 2024 00:23:16.139 read: IOPS=6460, BW=25.2MiB/s (26.5MB/s)(255MiB/10093msec) 00:23:16.139 slat (nsec): min=4816, max=37292, avg=6888.92, stdev=1830.61 00:23:16.139 clat (usec): min=745, max=39053, avg=19802.90, stdev=1405.61 00:23:16.139 lat (usec): min=750, max=39062, avg=19809.78, stdev=1405.66 00:23:16.139 clat percentiles (usec): 00:23:16.139 | 1.00th=[18482], 5.00th=[18744], 10.00th=[18744], 20.00th=[19006], 00:23:16.139 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:23:16.139 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20841], 95.00th=[21365], 00:23:16.139 | 99.00th=[26084], 99.50th=[28967], 99.90th=[30016], 99.95th=[34341], 00:23:16.139 | 99.99th=[38011] 00:23:16.139 write: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(256MiB/6165msec); 0 zone resets 00:23:16.139 slat (usec): min=6, max=745, avg= 9.40, stdev= 5.18 00:23:16.139 clat (usec): min=742, max=67593, avg=11977.29, stdev=14987.23 00:23:16.139 lat (usec): min=751, max=67604, avg=11986.69, stdev=14987.25 00:23:16.139 clat percentiles (usec): 00:23:16.139 | 1.00th=[ 1057], 5.00th=[ 1270], 10.00th=[ 1401], 20.00th=[ 1614], 00:23:16.139 | 30.00th=[ 1844], 40.00th=[ 2442], 50.00th=[ 7635], 60.00th=[ 8979], 00:23:16.139 | 70.00th=[10552], 80.00th=[12780], 90.00th=[43779], 95.00th=[46400], 00:23:16.139 | 99.00th=[51119], 99.50th=[52691], 99.90th=[55837], 99.95th=[56886], 00:23:16.139 | 99.99th=[63701] 00:23:16.139 bw ( KiB/s): min=11552, max=62176, per=94.84%, avg=40329.85, stdev=11813.78, samples=13 00:23:16.139 iops : min= 2888, max=15544, avg=10082.46, stdev=2953.45, samples=13 00:23:16.139 lat (usec) : 750=0.01%, 1000=0.29% 00:23:16.139 lat (msec) : 2=17.11%, 4=3.52%, 10=12.63%, 20=41.43%, 50=24.20% 00:23:16.139 lat (msec) : 100=0.82% 00:23:16.139 cpu : usr=98.94%, sys=0.25%, ctx=25, majf=0, 
minf=5567 00:23:16.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:16.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.139 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:16.139 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:16.139 00:23:16.139 Run status group 0 (all jobs): 00:23:16.139 READ: bw=25.2MiB/s (26.5MB/s), 25.2MiB/s-25.2MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10093-10093msec 00:23:16.139 WRITE: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=256MiB (268MB), run=6165-6165msec 00:23:17.513 ----------------------------------------------------- 00:23:17.513 Suppressions used: 00:23:17.513 count bytes template 00:23:17.513 1 5 /usr/src/fio/parse.c 00:23:17.513 2 192 /usr/src/fio/iolog.c 00:23:17.513 1 8 libtcmalloc_minimal.so 00:23:17.513 1 904 libcrypto.so 00:23:17.513 ----------------------------------------------------- 00:23:17.513 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:17.513 Remove shared memory files 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62513 /dev/shm/spdk_tgt_trace.pid78710 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:23:17.513 ************************************ 00:23:17.513 END TEST ftl_fio_basic 00:23:17.513 ************************************ 00:23:17.513 00:23:17.513 real 1m15.865s 00:23:17.513 user 2m51.079s 00:23:17.513 sys 0m3.718s 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.513 03:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:17.513 03:51:32 ftl -- common/autotest_common.sh@1142 -- # return 0 00:23:17.513 03:51:32 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:17.513 03:51:32 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:17.513 03:51:32 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.513 03:51:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:17.513 ************************************ 00:23:17.513 START TEST ftl_bdevperf 00:23:17.513 ************************************ 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:17.513 * Looking for test storage... 
00:23:17.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:23:17.513 03:51:32 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80716 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80716 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 80716 ']' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.513 03:51:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:17.771 [2024-07-26 03:51:32.456989] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:23:17.771 [2024-07-26 03:51:32.457148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80716 ] 00:23:17.771 [2024-07-26 03:51:32.623188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.029 [2024-07-26 03:51:32.848318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:23:18.595 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:19.161 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:19.161 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:23:19.161 03:51:33 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:19.161 03:51:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:19.161 03:51:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:19.161 03:51:33 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:23:19.161 03:51:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:23:19.161 03:51:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:19.420 { 00:23:19.420 "name": "nvme0n1", 00:23:19.420 "aliases": [ 00:23:19.420 "4831cd3b-7f30-4f9a-816c-a167a3d775ed" 00:23:19.420 ], 00:23:19.420 "product_name": "NVMe disk", 00:23:19.420 "block_size": 4096, 00:23:19.420 "num_blocks": 1310720, 00:23:19.420 "uuid": "4831cd3b-7f30-4f9a-816c-a167a3d775ed", 00:23:19.420 "assigned_rate_limits": { 00:23:19.420 "rw_ios_per_sec": 0, 00:23:19.420 "rw_mbytes_per_sec": 0, 00:23:19.420 "r_mbytes_per_sec": 0, 00:23:19.420 "w_mbytes_per_sec": 0 00:23:19.420 }, 00:23:19.420 "claimed": true, 00:23:19.420 "claim_type": "read_many_write_one", 00:23:19.420 "zoned": false, 00:23:19.420 "supported_io_types": { 00:23:19.420 "read": true, 00:23:19.420 "write": true, 00:23:19.420 "unmap": true, 00:23:19.420 "flush": true, 00:23:19.420 "reset": true, 00:23:19.420 "nvme_admin": true, 00:23:19.420 "nvme_io": true, 00:23:19.420 "nvme_io_md": false, 00:23:19.420 "write_zeroes": true, 00:23:19.420 "zcopy": false, 00:23:19.420 "get_zone_info": false, 00:23:19.420 "zone_management": false, 00:23:19.420 "zone_append": false, 00:23:19.420 "compare": true, 00:23:19.420 "compare_and_write": false, 00:23:19.420 "abort": true, 00:23:19.420 "seek_hole": false, 00:23:19.420 "seek_data": false, 00:23:19.420 "copy": true, 00:23:19.420 "nvme_iov_md": false 00:23:19.420 }, 00:23:19.420 "driver_specific": { 00:23:19.420 "nvme": [ 00:23:19.420 { 00:23:19.420 "pci_address": "0000:00:11.0", 00:23:19.420 "trid": { 00:23:19.420 "trtype": "PCIe", 00:23:19.420 "traddr": "0000:00:11.0" 00:23:19.420 }, 00:23:19.420 "ctrlr_data": { 00:23:19.420 "cntlid": 0, 00:23:19.420 "vendor_id": "0x1b36", 00:23:19.420 "model_number": "QEMU NVMe Ctrl", 00:23:19.420 "serial_number": "12341", 00:23:19.420 "firmware_revision": "8.0.0", 00:23:19.420 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:19.420 "oacs": { 00:23:19.420 "security": 0, 00:23:19.420 "format": 1, 00:23:19.420 "firmware": 0, 00:23:19.420 "ns_manage": 1 00:23:19.420 }, 00:23:19.420 "multi_ctrlr": false, 00:23:19.420 "ana_reporting": false 00:23:19.420 }, 00:23:19.420 "vs": { 00:23:19.420 "nvme_version": "1.4" 00:23:19.420 }, 00:23:19.420 "ns_data": { 00:23:19.420 "id": 1, 00:23:19.420 "can_share": false 00:23:19.420 } 00:23:19.420 } 00:23:19.420 ], 00:23:19.420 "mp_policy": "active_passive" 00:23:19.420 } 00:23:19.420 } 00:23:19.420 ]' 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:23:19.420 03:51:34 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:19.420 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:19.678 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=03485813-7c97-471a-a5eb-cc265ee2a9d5 00:23:19.678 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:23:19.678 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03485813-7c97-471a-a5eb-cc265ee2a9d5 00:23:19.937 03:51:34 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:20.195 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=464a316e-1430-4300-9c84-ef6b5051e040 00:23:20.196 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 464a316e-1430-4300-9c84-ef6b5051e040 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=618b8d33-fca9-473d-840f-fc8bf100f129 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 618b8d33-fca9-473d-840f-fc8bf100f129 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=618b8d33-fca9-473d-840f-fc8bf100f129 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 618b8d33-fca9-473d-840f-fc8bf100f129 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=618b8d33-fca9-473d-840f-fc8bf100f129 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 618b8d33-fca9-473d-840f-fc8bf100f129 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:20.763 { 00:23:20.763 "name": "618b8d33-fca9-473d-840f-fc8bf100f129", 00:23:20.763 "aliases": [ 00:23:20.763 "lvs/nvme0n1p0" 00:23:20.763 ], 00:23:20.763 "product_name": "Logical Volume", 00:23:20.763 "block_size": 4096, 00:23:20.763 "num_blocks": 26476544, 00:23:20.763 "uuid": "618b8d33-fca9-473d-840f-fc8bf100f129", 00:23:20.763 "assigned_rate_limits": { 00:23:20.763 "rw_ios_per_sec": 0, 00:23:20.763 "rw_mbytes_per_sec": 0, 00:23:20.763 "r_mbytes_per_sec": 0, 00:23:20.763 "w_mbytes_per_sec": 0 00:23:20.763 }, 00:23:20.763 "claimed": false, 00:23:20.763 "zoned": false, 00:23:20.763 "supported_io_types": { 00:23:20.763 "read": true, 00:23:20.763 "write": true, 00:23:20.763 "unmap": true, 00:23:20.763 "flush": false, 00:23:20.763 "reset": true, 00:23:20.763 "nvme_admin": false, 00:23:20.763 "nvme_io": false, 00:23:20.763 "nvme_io_md": false, 00:23:20.763 "write_zeroes": true, 00:23:20.763 "zcopy": false, 00:23:20.763 "get_zone_info": false, 00:23:20.763 "zone_management": false, 00:23:20.763 "zone_append": false, 00:23:20.763 "compare": false, 00:23:20.763 "compare_and_write": false, 00:23:20.763 "abort": false, 00:23:20.763 "seek_hole": true, 
00:23:20.763 "seek_data": true, 00:23:20.763 "copy": false, 00:23:20.763 "nvme_iov_md": false 00:23:20.763 }, 00:23:20.763 "driver_specific": { 00:23:20.763 "lvol": { 00:23:20.763 "lvol_store_uuid": "464a316e-1430-4300-9c84-ef6b5051e040", 00:23:20.763 "base_bdev": "nvme0n1", 00:23:20.763 "thin_provision": true, 00:23:20.763 "num_allocated_clusters": 0, 00:23:20.763 "snapshot": false, 00:23:20.763 "clone": false, 00:23:20.763 "esnap_clone": false 00:23:20.763 } 00:23:20.763 } 00:23:20.763 } 00:23:20.763 ]' 00:23:20.763 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:23:21.021 03:51:35 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 618b8d33-fca9-473d-840f-fc8bf100f129 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=618b8d33-fca9-473d-840f-fc8bf100f129 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:23:21.279 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 618b8d33-fca9-473d-840f-fc8bf100f129 00:23:21.548 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:21.548 { 00:23:21.548 "name": "618b8d33-fca9-473d-840f-fc8bf100f129", 00:23:21.548 "aliases": [ 00:23:21.548 "lvs/nvme0n1p0" 00:23:21.548 ], 00:23:21.548 "product_name": "Logical Volume", 00:23:21.548 "block_size": 4096, 00:23:21.548 "num_blocks": 26476544, 00:23:21.548 "uuid": "618b8d33-fca9-473d-840f-fc8bf100f129", 00:23:21.548 "assigned_rate_limits": { 00:23:21.548 "rw_ios_per_sec": 0, 00:23:21.548 "rw_mbytes_per_sec": 0, 00:23:21.548 "r_mbytes_per_sec": 0, 00:23:21.548 "w_mbytes_per_sec": 0 00:23:21.548 }, 00:23:21.548 "claimed": false, 00:23:21.548 "zoned": false, 00:23:21.548 "supported_io_types": { 00:23:21.548 "read": true, 00:23:21.548 "write": true, 00:23:21.548 "unmap": true, 00:23:21.548 "flush": false, 00:23:21.548 "reset": true, 00:23:21.548 "nvme_admin": false, 00:23:21.548 "nvme_io": false, 00:23:21.548 "nvme_io_md": false, 00:23:21.548 "write_zeroes": true, 00:23:21.548 "zcopy": false, 00:23:21.548 "get_zone_info": false, 00:23:21.548 "zone_management": false, 00:23:21.548 "zone_append": false, 00:23:21.548 "compare": false, 00:23:21.548 "compare_and_write": false, 00:23:21.548 "abort": false, 00:23:21.548 "seek_hole": true, 00:23:21.548 "seek_data": true, 00:23:21.548 
"copy": false, 00:23:21.548 "nvme_iov_md": false 00:23:21.548 }, 00:23:21.548 "driver_specific": { 00:23:21.548 "lvol": { 00:23:21.548 "lvol_store_uuid": "464a316e-1430-4300-9c84-ef6b5051e040", 00:23:21.548 "base_bdev": "nvme0n1", 00:23:21.548 "thin_provision": true, 00:23:21.548 "num_allocated_clusters": 0, 00:23:21.548 "snapshot": false, 00:23:21.548 "clone": false, 00:23:21.548 "esnap_clone": false 00:23:21.548 } 00:23:21.548 } 00:23:21.548 } 00:23:21.548 ]' 00:23:21.548 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:21.548 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:23:21.548 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:21.819 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:21.819 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:21.819 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:23:21.819 03:51:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:23:21.819 03:51:36 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:22.078 03:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:23:22.078 03:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 618b8d33-fca9-473d-840f-fc8bf100f129 00:23:22.078 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=618b8d33-fca9-473d-840f-fc8bf100f129 00:23:22.078 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:22.078 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:23:22.078 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:23:22.078 03:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 618b8d33-fca9-473d-840f-fc8bf100f129 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:22.337 { 00:23:22.337 "name": "618b8d33-fca9-473d-840f-fc8bf100f129", 00:23:22.337 "aliases": [ 00:23:22.337 "lvs/nvme0n1p0" 00:23:22.337 ], 00:23:22.337 "product_name": "Logical Volume", 00:23:22.337 "block_size": 4096, 00:23:22.337 "num_blocks": 26476544, 00:23:22.337 "uuid": "618b8d33-fca9-473d-840f-fc8bf100f129", 00:23:22.337 "assigned_rate_limits": { 00:23:22.337 "rw_ios_per_sec": 0, 00:23:22.337 "rw_mbytes_per_sec": 0, 00:23:22.337 "r_mbytes_per_sec": 0, 00:23:22.337 "w_mbytes_per_sec": 0 00:23:22.337 }, 00:23:22.337 "claimed": false, 00:23:22.337 "zoned": false, 00:23:22.337 "supported_io_types": { 00:23:22.337 "read": true, 00:23:22.337 "write": true, 00:23:22.337 "unmap": true, 00:23:22.337 "flush": false, 00:23:22.337 "reset": true, 00:23:22.337 "nvme_admin": false, 00:23:22.337 "nvme_io": false, 00:23:22.337 "nvme_io_md": false, 00:23:22.337 "write_zeroes": true, 00:23:22.337 "zcopy": false, 00:23:22.337 "get_zone_info": false, 00:23:22.337 "zone_management": false, 00:23:22.337 "zone_append": false, 00:23:22.337 "compare": false, 00:23:22.337 "compare_and_write": false, 00:23:22.337 "abort": false, 00:23:22.337 "seek_hole": true, 00:23:22.337 "seek_data": true, 00:23:22.337 "copy": false, 00:23:22.337 "nvme_iov_md": false 00:23:22.337 }, 00:23:22.337 "driver_specific": { 00:23:22.337 "lvol": { 00:23:22.337 "lvol_store_uuid": "464a316e-1430-4300-9c84-ef6b5051e040", 00:23:22.337 "base_bdev": 
"nvme0n1", 00:23:22.337 "thin_provision": true, 00:23:22.337 "num_allocated_clusters": 0, 00:23:22.337 "snapshot": false, 00:23:22.337 "clone": false, 00:23:22.337 "esnap_clone": false 00:23:22.337 } 00:23:22.337 } 00:23:22.337 } 00:23:22.337 ]' 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:23:22.337 03:51:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 618b8d33-fca9-473d-840f-fc8bf100f129 -c nvc0n1p0 --l2p_dram_limit 20 00:23:22.597 [2024-07-26 03:51:37.434879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.434949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:22.597 [2024-07-26 03:51:37.434975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:22.597 [2024-07-26 03:51:37.434989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.435069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.435089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.597 [2024-07-26 03:51:37.435109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:22.597 [2024-07-26 03:51:37.435122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.435154] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:22.597 [2024-07-26 03:51:37.436135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:22.597 [2024-07-26 03:51:37.436180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.436196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.597 [2024-07-26 03:51:37.436212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:23:22.597 [2024-07-26 03:51:37.436226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.436440] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a77f0df0-e869-46fd-8d1d-1099c6b5c8dc 00:23:22.597 [2024-07-26 03:51:37.437502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.437549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:22.597 [2024-07-26 03:51:37.437570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:22.597 [2024-07-26 03:51:37.437585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.442305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.442359] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.597 [2024-07-26 03:51:37.442377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:23:22.597 [2024-07-26 03:51:37.442393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.442531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.442569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.597 [2024-07-26 03:51:37.442586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:22.597 [2024-07-26 03:51:37.442604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.442694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.442718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:22.597 [2024-07-26 03:51:37.442732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:22.597 [2024-07-26 03:51:37.442747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.442779] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.597 [2024-07-26 03:51:37.447348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.447392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.597 [2024-07-26 03:51:37.447414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:23:22.597 [2024-07-26 03:51:37.447428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.447478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.447496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:22.597 [2024-07-26 03:51:37.447511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:22.597 [2024-07-26 03:51:37.447524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.447593] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:22.597 [2024-07-26 03:51:37.447758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:22.597 [2024-07-26 03:51:37.447783] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:22.597 [2024-07-26 03:51:37.447799] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:22.597 [2024-07-26 03:51:37.447831] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:22.597 [2024-07-26 03:51:37.447849] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:22.597 [2024-07-26 03:51:37.447864] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:22.597 [2024-07-26 03:51:37.447876] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:22.597 [2024-07-26 03:51:37.447892] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:22.597 [2024-07-26 03:51:37.447904] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:23:22.597 [2024-07-26 03:51:37.447919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.447932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:22.597 [2024-07-26 03:51:37.447951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:23:22.597 [2024-07-26 03:51:37.447964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.448060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.597 [2024-07-26 03:51:37.448076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:22.597 [2024-07-26 03:51:37.448092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:22.597 [2024-07-26 03:51:37.448105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.597 [2024-07-26 03:51:37.448210] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:22.597 [2024-07-26 03:51:37.448227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:22.597 [2024-07-26 03:51:37.448243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.597 [2024-07-26 03:51:37.448258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.597 [2024-07-26 03:51:37.448272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:22.597 [2024-07-26 03:51:37.448284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:22.597 [2024-07-26 03:51:37.448297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:22.597 [2024-07-26 03:51:37.448309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:22.597 [2024-07-26 03:51:37.448322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:22.597 [2024-07-26 03:51:37.448334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.597 [2024-07-26 03:51:37.448347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:22.597 [2024-07-26 03:51:37.448359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:22.597 [2024-07-26 03:51:37.448372] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.597 [2024-07-26 03:51:37.448383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:22.597 [2024-07-26 03:51:37.448398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:22.597 [2024-07-26 03:51:37.448409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.597 [2024-07-26 03:51:37.448424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:22.597 [2024-07-26 03:51:37.448436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:22.597 [2024-07-26 03:51:37.448465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.597 [2024-07-26 03:51:37.448478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:22.597 [2024-07-26 03:51:37.448491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.598 [2024-07-26 03:51:37.448516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:22.598 [2024-07-26 03:51:37.448527] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.598 [2024-07-26 03:51:37.448552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:22.598 [2024-07-26 03:51:37.448565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.598 [2024-07-26 03:51:37.448589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:22.598 [2024-07-26 03:51:37.448601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.598 [2024-07-26 03:51:37.448625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:22.598 [2024-07-26 03:51:37.448641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.598 [2024-07-26 03:51:37.448666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:22.598 [2024-07-26 03:51:37.448677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:22.598 [2024-07-26 03:51:37.448690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.598 [2024-07-26 03:51:37.448701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:22.598 [2024-07-26 03:51:37.448717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:22.598 [2024-07-26 03:51:37.448728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:22.598 [2024-07-26 03:51:37.448753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:22.598 [2024-07-26 03:51:37.448765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448777] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:22.598 [2024-07-26 03:51:37.448791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:22.598 [2024-07-26 03:51:37.448803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.598 [2024-07-26 03:51:37.448830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.598 [2024-07-26 03:51:37.448846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:22.598 [2024-07-26 03:51:37.448863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:22.598 [2024-07-26 03:51:37.448874] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:22.598 [2024-07-26 03:51:37.448889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:22.598 [2024-07-26 03:51:37.448900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:22.598 [2024-07-26 03:51:37.448914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:22.598 [2024-07-26 03:51:37.448929] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:22.598 [2024-07-26 03:51:37.448947] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.598 [2024-07-26 03:51:37.448962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:22.598 [2024-07-26 03:51:37.448976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:22.598 [2024-07-26 03:51:37.448989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:22.598 [2024-07-26 03:51:37.449003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:22.598 [2024-07-26 03:51:37.449016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:22.598 [2024-07-26 03:51:37.449030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:22.598 [2024-07-26 03:51:37.449043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:22.598 [2024-07-26 03:51:37.449057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:22.598 [2024-07-26 03:51:37.449069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:22.598 [2024-07-26 03:51:37.449087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:22.598 [2024-07-26 03:51:37.449100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:22.598 [2024-07-26 03:51:37.449115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:22.598 [2024-07-26 03:51:37.449127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:22.598 [2024-07-26 03:51:37.449142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:22.598 [2024-07-26 03:51:37.449154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:22.598 [2024-07-26 03:51:37.449170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.598 [2024-07-26 03:51:37.449184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:22.598 [2024-07-26 03:51:37.449198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:22.598 [2024-07-26 03:51:37.449211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:22.598 [2024-07-26 03:51:37.449226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:22.598 [2024-07-26 03:51:37.449240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.598 [2024-07-26 03:51:37.449259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:22.598 [2024-07-26 03:51:37.449272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:23:22.598 [2024-07-26 03:51:37.449286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.598 [2024-07-26 03:51:37.449333] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:22.598 [2024-07-26 03:51:37.449356] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:25.127 [2024-07-26 03:51:39.525295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.525380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:25.127 [2024-07-26 03:51:39.525419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2075.974 ms 00:23:25.127 [2024-07-26 03:51:39.525436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.564368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.564449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.127 [2024-07-26 03:51:39.564474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.587 ms 00:23:25.127 [2024-07-26 03:51:39.564491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.564693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.564720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:25.127 [2024-07-26 03:51:39.564736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:25.127 [2024-07-26 03:51:39.564754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.603735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.603806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:25.127 [2024-07-26 03:51:39.603843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.923 ms 00:23:25.127 [2024-07-26 03:51:39.603861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.603920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.603941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:25.127 [2024-07-26 03:51:39.603955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:25.127 [2024-07-26 03:51:39.603970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.604379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.604403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:25.127 [2024-07-26 03:51:39.604418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:23:25.127 [2024-07-26 03:51:39.604433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.604583] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.604606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:25.127 [2024-07-26 03:51:39.604623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:23:25.127 [2024-07-26 03:51:39.604640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.621020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.621102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:25.127 [2024-07-26 03:51:39.621124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.351 ms 00:23:25.127 [2024-07-26 03:51:39.621140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.634765] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:25.127 [2024-07-26 03:51:39.639777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.639828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:25.127 [2024-07-26 03:51:39.639853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.488 ms 00:23:25.127 [2024-07-26 03:51:39.639867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.702224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.702310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:25.127 [2024-07-26 03:51:39.702337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.300 ms 00:23:25.127 [2024-07-26 03:51:39.702351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.702608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.702631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:25.127 [2024-07-26 03:51:39.702650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:23:25.127 [2024-07-26 03:51:39.702664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.734153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.734212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:25.127 [2024-07-26 03:51:39.734234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.401 ms 00:23:25.127 [2024-07-26 03:51:39.734249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.765176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.765246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:25.127 [2024-07-26 03:51:39.765271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.864 ms 00:23:25.127 [2024-07-26 03:51:39.765284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.766050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.766083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:25.127 [2024-07-26 03:51:39.766102] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:23:25.127 [2024-07-26 03:51:39.766116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.855063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.855136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:25.127 [2024-07-26 03:51:39.855166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.865 ms 00:23:25.127 [2024-07-26 03:51:39.855180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.127 [2024-07-26 03:51:39.887776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.127 [2024-07-26 03:51:39.887845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:25.127 [2024-07-26 03:51:39.887869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.538 ms 00:23:25.127 [2024-07-26 03:51:39.887888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.128 [2024-07-26 03:51:39.919508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.128 [2024-07-26 03:51:39.919563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:25.128 [2024-07-26 03:51:39.919585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.566 ms 00:23:25.128 [2024-07-26 03:51:39.919599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.128 [2024-07-26 03:51:39.951285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.128 [2024-07-26 03:51:39.951337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:25.128 [2024-07-26 03:51:39.951360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.630 ms 00:23:25.128 [2024-07-26 03:51:39.951373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.128 [2024-07-26 03:51:39.951432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.128 [2024-07-26 03:51:39.951452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:25.128 [2024-07-26 03:51:39.951472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:25.128 [2024-07-26 03:51:39.951485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.128 [2024-07-26 03:51:39.951607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.128 [2024-07-26 03:51:39.951628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:25.128 [2024-07-26 03:51:39.951644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:25.128 [2024-07-26 03:51:39.951660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.128 [2024-07-26 03:51:39.952700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2517.356 ms, result 0 00:23:25.128 { 00:23:25.128 "name": "ftl0", 00:23:25.128 "uuid": "a77f0df0-e869-46fd-8d1d-1099c6b5c8dc" 00:23:25.128 } 00:23:25.128 03:51:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:25.128 03:51:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:23:25.128 03:51:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:23:25.386 03:51:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:25.644 [2024-07-26 03:51:40.373202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:25.644 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:25.644 Zero copy mechanism will not be used. 00:23:25.644 Running I/O for 4 seconds... 00:23:29.830 00:23:29.830 Latency(us) 00:23:29.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.830 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:23:29.830 ftl0 : 4.00 1887.79 125.36 0.00 0.00 552.39 228.07 21805.61 00:23:29.830 =================================================================================================================== 00:23:29.830 Total : 1887.79 125.36 0.00 0.00 552.39 228.07 21805.61 00:23:29.830 [2024-07-26 03:51:44.383101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:29.830 0 00:23:29.830 03:51:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:23:29.830 [2024-07-26 03:51:44.522141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:29.830 Running I/O for 4 seconds... 00:23:34.013 00:23:34.013 Latency(us) 00:23:34.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.013 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:23:34.013 ftl0 : 4.02 7313.33 28.57 0.00 0.00 17458.37 329.54 59101.56 00:23:34.013 =================================================================================================================== 00:23:34.013 Total : 7313.33 28.57 0.00 0.00 17458.37 0.00 59101.56 00:23:34.013 [2024-07-26 03:51:48.550027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:34.013 0 00:23:34.013 03:51:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:23:34.013 [2024-07-26 03:51:48.679823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:34.013 Running I/O for 4 seconds... 
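For reference, the RPC sequence traced in this excerpt can be collected into a single shell sketch. This is only a restatement of the commands already visible above, not the test script itself: it assumes an SPDK bdevperf application is already running with its RPC socket available (how it is launched is outside this excerpt), that the base controller nvme0n1 at 0000:00:11.0 was attached earlier in the run, and the <lvstore-uuid>/<lvol-uuid> placeholders stand in for the UUIDs that rpc.py prints on each invocation (464a316e-... and 618b8d33-... in this particular log).

    # Minimal sketch of the command flow seen in this log (assumptions noted above).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # Base bdev: thin-provisioned lvol on nvme0n1.
    # 103424 MiB = 26476544 blocks x 4096 B / 1048576, matching get_bdev_size above.
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                        # prints <lvstore-uuid>
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>      # prints <lvol-uuid>

    # NV cache: split a 5171 MiB chunk off the second controller as nvc0n1p0.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # Create the FTL bdev with a 20 MiB L2P DRAM limit.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

    # The three bdevperf passes exercised in this section.
    $BPERF perform_tests -q 1   -w randwrite -t 4 -o 69632
    $BPERF perform_tests -q 128 -w randwrite -t 4 -o 4096
    $BPERF perform_tests -q 128 -w verify    -t 4 -o 4096

    # Teardown, as performed at the end of this section.
    $RPC bdev_ftl_delete -b ftl0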
00:23:38.225 00:23:38.225 Latency(us) 00:23:38.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.225 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:38.225 Verification LBA range: start 0x0 length 0x1400000 00:23:38.226 ftl0 : 4.01 5955.94 23.27 0.00 0.00 21414.26 359.33 28359.21 00:23:38.226 =================================================================================================================== 00:23:38.226 Total : 5955.94 23.27 0.00 0.00 21414.26 0.00 28359.21 00:23:38.226 [2024-07-26 03:51:52.710850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:38.226 0 00:23:38.226 03:51:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:23:38.226 [2024-07-26 03:51:52.960621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.226 [2024-07-26 03:51:52.960685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:38.226 [2024-07-26 03:51:52.960712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.226 [2024-07-26 03:51:52.960730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.226 [2024-07-26 03:51:52.960768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:38.226 [2024-07-26 03:51:52.964093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.226 [2024-07-26 03:51:52.964135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:38.226 [2024-07-26 03:51:52.964152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:23:38.226 [2024-07-26 03:51:52.964167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.226 [2024-07-26 03:51:52.965625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.226 [2024-07-26 03:51:52.965676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:38.226 [2024-07-26 03:51:52.965694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:23:38.226 [2024-07-26 03:51:52.965710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.145075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.145163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:38.486 [2024-07-26 03:51:53.145186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 179.338 ms 00:23:38.486 [2024-07-26 03:51:53.145207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.151945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.151991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:38.486 [2024-07-26 03:51:53.152008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.687 ms 00:23:38.486 [2024-07-26 03:51:53.152024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.183153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.183216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:38.486 [2024-07-26 03:51:53.183238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.030 ms 00:23:38.486 [2024-07-26 03:51:53.183254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.201892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.201949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:38.486 [2024-07-26 03:51:53.201972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:23:38.486 [2024-07-26 03:51:53.201988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.202172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.202199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:38.486 [2024-07-26 03:51:53.202215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:38.486 [2024-07-26 03:51:53.202232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.233518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.233571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:38.486 [2024-07-26 03:51:53.233590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.261 ms 00:23:38.486 [2024-07-26 03:51:53.233605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.264635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.264685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:38.486 [2024-07-26 03:51:53.264703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.982 ms 00:23:38.486 [2024-07-26 03:51:53.264719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.295402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.295451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:38.486 [2024-07-26 03:51:53.295470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.636 ms 00:23:38.486 [2024-07-26 03:51:53.295485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.326177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.486 [2024-07-26 03:51:53.326232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:38.486 [2024-07-26 03:51:53.326251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.579 ms 00:23:38.486 [2024-07-26 03:51:53.326269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.486 [2024-07-26 03:51:53.326320] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:38.486 [2024-07-26 03:51:53.326349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:23:38.486 [2024-07-26 03:51:53.326410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.326996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:38.486 [2024-07-26 03:51:53.327166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327507] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:38.487 [2024-07-26 03:51:53.327840] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:38.487 [2024-07-26 03:51:53.327855] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a77f0df0-e869-46fd-8d1d-1099c6b5c8dc 00:23:38.487 [2024-07-26 03:51:53.327870] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:38.487 [2024-07-26 03:51:53.327883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:23:38.487 [2024-07-26 03:51:53.327896] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:38.487 [2024-07-26 03:51:53.327911] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:38.487 [2024-07-26 03:51:53.327924] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:38.487 [2024-07-26 03:51:53.327937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:38.487 [2024-07-26 03:51:53.327951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:38.487 [2024-07-26 03:51:53.327962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:38.487 [2024-07-26 03:51:53.327977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:38.487 [2024-07-26 03:51:53.327989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.487 [2024-07-26 03:51:53.328005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:38.487 [2024-07-26 03:51:53.328019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:23:38.487 [2024-07-26 03:51:53.328033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.487 [2024-07-26 03:51:53.344674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.487 [2024-07-26 03:51:53.344742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:38.487 [2024-07-26 03:51:53.344762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.572 ms 00:23:38.487 [2024-07-26 03:51:53.344777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.487 [2024-07-26 03:51:53.345239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.487 [2024-07-26 03:51:53.345275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:38.487 [2024-07-26 03:51:53.345292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:23:38.487 [2024-07-26 03:51:53.345307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.487 [2024-07-26 03:51:53.385309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.487 [2024-07-26 03:51:53.385383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.487 [2024-07-26 03:51:53.385404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.487 [2024-07-26 03:51:53.385423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.487 [2024-07-26 03:51:53.385506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.487 [2024-07-26 03:51:53.385527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.487 [2024-07-26 03:51:53.385541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.487 [2024-07-26 03:51:53.385556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.487 [2024-07-26 03:51:53.385679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.487 [2024-07-26 03:51:53.385709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.487 [2024-07-26 03:51:53.385723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.487 [2024-07-26 03:51:53.385738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.487 [2024-07-26 03:51:53.385762] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.487 [2024-07-26 03:51:53.385780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.487 [2024-07-26 03:51:53.385793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.487 [2024-07-26 03:51:53.385808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.484413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.484490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.746 [2024-07-26 03:51:53.484510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.484529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.568499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.568576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.746 [2024-07-26 03:51:53.568597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.568613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.568746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.568772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.746 [2024-07-26 03:51:53.568790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.568804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.568898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.568923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.746 [2024-07-26 03:51:53.568937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.568952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.569073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.569098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.746 [2024-07-26 03:51:53.569112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.569132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.569182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.569207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.746 [2024-07-26 03:51:53.569222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.569236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.569283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.569303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.746 [2024-07-26 03:51:53.569317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.569331] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.569388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.746 [2024-07-26 03:51:53.569410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.746 [2024-07-26 03:51:53.569424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.746 [2024-07-26 03:51:53.569438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.746 [2024-07-26 03:51:53.569589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 608.939 ms, result 0 00:23:38.746 true 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80716 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 80716 ']' 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 80716 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80716 00:23:38.746 killing process with pid 80716 00:23:38.746 Received shutdown signal, test time was about 4.000000 seconds 00:23:38.746 00:23:38.746 Latency(us) 00:23:38.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.746 =================================================================================================================== 00:23:38.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80716' 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 80716 00:23:38.746 03:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 80716 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:23:40.122 Remove shared memory files 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:40.122 00:23:40.122 real 0m22.545s 00:23:40.122 user 0m26.512s 00:23:40.122 sys 0m1.100s 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.122 03:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:40.122 ************************************ 00:23:40.122 END TEST ftl_bdevperf 00:23:40.122 
************************************ 00:23:40.122 03:51:54 ftl -- common/autotest_common.sh@1142 -- # return 0 00:23:40.122 03:51:54 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:40.122 03:51:54 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:40.122 03:51:54 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.122 03:51:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:40.122 ************************************ 00:23:40.122 START TEST ftl_trim 00:23:40.122 ************************************ 00:23:40.122 03:51:54 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:40.122 * Looking for test storage... 00:23:40.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.122 03:51:54 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:40.123 
03:51:54 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=81065 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 81065 00:23:40.123 03:51:54 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:40.123 03:51:54 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81065 ']' 00:23:40.123 03:51:54 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.123 03:51:54 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.123 03:51:54 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.123 03:51:54 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.123 03:51:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:40.381 [2024-07-26 03:51:55.079956] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:23:40.381 [2024-07-26 03:51:55.080856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81065 ] 00:23:40.381 [2024-07-26 03:51:55.256185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:40.639 [2024-07-26 03:51:55.486367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.639 [2024-07-26 03:51:55.486516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.639 [2024-07-26 03:51:55.486519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.573 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.573 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:23:41.573 03:51:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:41.573 03:51:56 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:41.573 03:51:56 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:41.573 03:51:56 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:41.573 03:51:56 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:41.573 03:51:56 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:41.831 03:51:56 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:41.831 03:51:56 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:41.831 03:51:56 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:41.831 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:41.831 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:41.831 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:41.831 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:23:41.831 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:42.090 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:42.090 { 00:23:42.090 "name": "nvme0n1", 00:23:42.090 "aliases": [ 00:23:42.090 "72597594-8fb6-42a3-b0d0-a3d2f7ee776b" 00:23:42.090 ], 00:23:42.090 "product_name": "NVMe disk", 00:23:42.090 "block_size": 4096, 00:23:42.090 "num_blocks": 1310720, 00:23:42.090 "uuid": "72597594-8fb6-42a3-b0d0-a3d2f7ee776b", 00:23:42.090 "assigned_rate_limits": { 00:23:42.090 "rw_ios_per_sec": 0, 00:23:42.090 "rw_mbytes_per_sec": 0, 00:23:42.090 "r_mbytes_per_sec": 0, 00:23:42.090 "w_mbytes_per_sec": 0 00:23:42.090 }, 00:23:42.090 "claimed": true, 00:23:42.090 "claim_type": "read_many_write_one", 00:23:42.090 "zoned": false, 00:23:42.090 "supported_io_types": { 00:23:42.090 "read": true, 00:23:42.090 "write": true, 00:23:42.090 "unmap": true, 00:23:42.090 "flush": true, 00:23:42.090 "reset": true, 00:23:42.090 "nvme_admin": true, 00:23:42.090 "nvme_io": true, 00:23:42.090 "nvme_io_md": false, 00:23:42.090 "write_zeroes": true, 00:23:42.090 "zcopy": false, 00:23:42.090 "get_zone_info": false, 00:23:42.090 "zone_management": false, 00:23:42.090 "zone_append": false, 00:23:42.090 "compare": true, 00:23:42.090 "compare_and_write": false, 00:23:42.090 "abort": true, 00:23:42.090 "seek_hole": false, 00:23:42.090 "seek_data": false, 00:23:42.090 
"copy": true, 00:23:42.090 "nvme_iov_md": false 00:23:42.090 }, 00:23:42.090 "driver_specific": { 00:23:42.090 "nvme": [ 00:23:42.090 { 00:23:42.090 "pci_address": "0000:00:11.0", 00:23:42.090 "trid": { 00:23:42.090 "trtype": "PCIe", 00:23:42.090 "traddr": "0000:00:11.0" 00:23:42.090 }, 00:23:42.090 "ctrlr_data": { 00:23:42.090 "cntlid": 0, 00:23:42.090 "vendor_id": "0x1b36", 00:23:42.090 "model_number": "QEMU NVMe Ctrl", 00:23:42.090 "serial_number": "12341", 00:23:42.090 "firmware_revision": "8.0.0", 00:23:42.090 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:42.090 "oacs": { 00:23:42.090 "security": 0, 00:23:42.090 "format": 1, 00:23:42.090 "firmware": 0, 00:23:42.090 "ns_manage": 1 00:23:42.090 }, 00:23:42.090 "multi_ctrlr": false, 00:23:42.090 "ana_reporting": false 00:23:42.090 }, 00:23:42.091 "vs": { 00:23:42.091 "nvme_version": "1.4" 00:23:42.091 }, 00:23:42.091 "ns_data": { 00:23:42.091 "id": 1, 00:23:42.091 "can_share": false 00:23:42.091 } 00:23:42.091 } 00:23:42.091 ], 00:23:42.091 "mp_policy": "active_passive" 00:23:42.091 } 00:23:42.091 } 00:23:42.091 ]' 00:23:42.091 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:42.091 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:23:42.091 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:42.091 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:42.091 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:42.091 03:51:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:23:42.091 03:51:56 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:42.091 03:51:56 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:42.091 03:51:56 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:42.091 03:51:56 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:42.091 03:51:56 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:42.349 03:51:57 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=464a316e-1430-4300-9c84-ef6b5051e040 00:23:42.349 03:51:57 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:42.349 03:51:57 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 464a316e-1430-4300-9c84-ef6b5051e040 00:23:42.917 03:51:57 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:42.917 03:51:57 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7559885e-44ca-4cc5-88fe-1153da12f9fd 00:23:42.917 03:51:57 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7559885e-44ca-4cc5-88fe-1153da12f9fd 00:23:43.175 03:51:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.175 03:51:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.175 03:51:58 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:43.175 03:51:58 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:43.175 03:51:58 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.175 03:51:58 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:43.175 03:51:58 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.175 03:51:58 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.175 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:43.175 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:43.175 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:23:43.175 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.434 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:43.434 { 00:23:43.434 "name": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:43.434 "aliases": [ 00:23:43.434 "lvs/nvme0n1p0" 00:23:43.434 ], 00:23:43.434 "product_name": "Logical Volume", 00:23:43.434 "block_size": 4096, 00:23:43.434 "num_blocks": 26476544, 00:23:43.434 "uuid": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:43.434 "assigned_rate_limits": { 00:23:43.434 "rw_ios_per_sec": 0, 00:23:43.434 "rw_mbytes_per_sec": 0, 00:23:43.434 "r_mbytes_per_sec": 0, 00:23:43.434 "w_mbytes_per_sec": 0 00:23:43.434 }, 00:23:43.434 "claimed": false, 00:23:43.434 "zoned": false, 00:23:43.434 "supported_io_types": { 00:23:43.434 "read": true, 00:23:43.434 "write": true, 00:23:43.434 "unmap": true, 00:23:43.434 "flush": false, 00:23:43.434 "reset": true, 00:23:43.434 "nvme_admin": false, 00:23:43.434 "nvme_io": false, 00:23:43.434 "nvme_io_md": false, 00:23:43.434 "write_zeroes": true, 00:23:43.434 "zcopy": false, 00:23:43.434 "get_zone_info": false, 00:23:43.434 "zone_management": false, 00:23:43.434 "zone_append": false, 00:23:43.434 "compare": false, 00:23:43.434 "compare_and_write": false, 00:23:43.434 "abort": false, 00:23:43.434 "seek_hole": true, 00:23:43.434 "seek_data": true, 00:23:43.434 "copy": false, 00:23:43.434 "nvme_iov_md": false 00:23:43.434 }, 00:23:43.434 "driver_specific": { 00:23:43.434 "lvol": { 00:23:43.434 "lvol_store_uuid": "7559885e-44ca-4cc5-88fe-1153da12f9fd", 00:23:43.434 "base_bdev": "nvme0n1", 00:23:43.434 "thin_provision": true, 00:23:43.434 "num_allocated_clusters": 0, 00:23:43.434 "snapshot": false, 00:23:43.434 "clone": false, 00:23:43.434 "esnap_clone": false 00:23:43.434 } 00:23:43.434 } 00:23:43.434 } 00:23:43.434 ]' 00:23:43.434 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:43.692 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:23:43.692 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:43.692 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:43.692 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:43.692 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:23:43.692 03:51:58 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:43.692 03:51:58 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:43.692 03:51:58 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:43.950 03:51:58 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:43.950 03:51:58 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:43.950 03:51:58 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.950 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:43.950 
03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:43.950 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:43.950 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:23:43.950 03:51:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:44.209 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:44.209 { 00:23:44.209 "name": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:44.209 "aliases": [ 00:23:44.209 "lvs/nvme0n1p0" 00:23:44.209 ], 00:23:44.209 "product_name": "Logical Volume", 00:23:44.209 "block_size": 4096, 00:23:44.209 "num_blocks": 26476544, 00:23:44.209 "uuid": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:44.209 "assigned_rate_limits": { 00:23:44.209 "rw_ios_per_sec": 0, 00:23:44.209 "rw_mbytes_per_sec": 0, 00:23:44.209 "r_mbytes_per_sec": 0, 00:23:44.209 "w_mbytes_per_sec": 0 00:23:44.209 }, 00:23:44.209 "claimed": false, 00:23:44.209 "zoned": false, 00:23:44.209 "supported_io_types": { 00:23:44.209 "read": true, 00:23:44.209 "write": true, 00:23:44.209 "unmap": true, 00:23:44.209 "flush": false, 00:23:44.209 "reset": true, 00:23:44.209 "nvme_admin": false, 00:23:44.209 "nvme_io": false, 00:23:44.209 "nvme_io_md": false, 00:23:44.209 "write_zeroes": true, 00:23:44.209 "zcopy": false, 00:23:44.209 "get_zone_info": false, 00:23:44.209 "zone_management": false, 00:23:44.209 "zone_append": false, 00:23:44.209 "compare": false, 00:23:44.209 "compare_and_write": false, 00:23:44.209 "abort": false, 00:23:44.209 "seek_hole": true, 00:23:44.209 "seek_data": true, 00:23:44.209 "copy": false, 00:23:44.209 "nvme_iov_md": false 00:23:44.209 }, 00:23:44.209 "driver_specific": { 00:23:44.209 "lvol": { 00:23:44.209 "lvol_store_uuid": "7559885e-44ca-4cc5-88fe-1153da12f9fd", 00:23:44.209 "base_bdev": "nvme0n1", 00:23:44.209 "thin_provision": true, 00:23:44.209 "num_allocated_clusters": 0, 00:23:44.209 "snapshot": false, 00:23:44.209 "clone": false, 00:23:44.209 "esnap_clone": false 00:23:44.209 } 00:23:44.209 } 00:23:44.209 } 00:23:44.209 ]' 00:23:44.209 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:44.209 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:23:44.209 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:44.468 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:44.468 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:44.468 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:23:44.468 03:51:59 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:44.468 03:51:59 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:44.729 03:51:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:44.729 03:51:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:44.729 03:51:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:44.729 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:44.729 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:44.729 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:44.729 03:51:59 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:23:44.729 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f 00:23:44.988 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:44.988 { 00:23:44.988 "name": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:44.988 "aliases": [ 00:23:44.988 "lvs/nvme0n1p0" 00:23:44.988 ], 00:23:44.988 "product_name": "Logical Volume", 00:23:44.988 "block_size": 4096, 00:23:44.988 "num_blocks": 26476544, 00:23:44.988 "uuid": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:44.988 "assigned_rate_limits": { 00:23:44.988 "rw_ios_per_sec": 0, 00:23:44.988 "rw_mbytes_per_sec": 0, 00:23:44.988 "r_mbytes_per_sec": 0, 00:23:44.988 "w_mbytes_per_sec": 0 00:23:44.988 }, 00:23:44.988 "claimed": false, 00:23:44.988 "zoned": false, 00:23:44.988 "supported_io_types": { 00:23:44.988 "read": true, 00:23:44.988 "write": true, 00:23:44.988 "unmap": true, 00:23:44.988 "flush": false, 00:23:44.988 "reset": true, 00:23:44.988 "nvme_admin": false, 00:23:44.988 "nvme_io": false, 00:23:44.988 "nvme_io_md": false, 00:23:44.988 "write_zeroes": true, 00:23:44.988 "zcopy": false, 00:23:44.988 "get_zone_info": false, 00:23:44.988 "zone_management": false, 00:23:44.988 "zone_append": false, 00:23:44.988 "compare": false, 00:23:44.988 "compare_and_write": false, 00:23:44.988 "abort": false, 00:23:44.988 "seek_hole": true, 00:23:44.988 "seek_data": true, 00:23:44.988 "copy": false, 00:23:44.988 "nvme_iov_md": false 00:23:44.988 }, 00:23:44.988 "driver_specific": { 00:23:44.988 "lvol": { 00:23:44.988 "lvol_store_uuid": "7559885e-44ca-4cc5-88fe-1153da12f9fd", 00:23:44.988 "base_bdev": "nvme0n1", 00:23:44.988 "thin_provision": true, 00:23:44.988 "num_allocated_clusters": 0, 00:23:44.988 "snapshot": false, 00:23:44.988 "clone": false, 00:23:44.988 "esnap_clone": false 00:23:44.988 } 00:23:44.988 } 00:23:44.988 } 00:23:44.988 ]' 00:23:44.988 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:44.988 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:23:44.988 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:44.988 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:44.988 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:44.988 03:51:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:23:44.988 03:51:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:44.988 03:51:59 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:45.250 [2024-07-26 03:52:00.015641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.016174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:45.250 [2024-07-26 03:52:00.016312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:45.250 [2024-07-26 03:52:00.016428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.019950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.020097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:45.250 [2024-07-26 03:52:00.020200] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:23:45.250 [2024-07-26 03:52:00.020287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.020636] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:45.250 [2024-07-26 03:52:00.021694] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:45.250 [2024-07-26 03:52:00.021863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.021962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:45.250 [2024-07-26 03:52:00.022049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:23:45.250 [2024-07-26 03:52:00.022139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.022426] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f7b734ba-648f-487f-8a7d-88c9b2c09583 00:23:45.250 [2024-07-26 03:52:00.023618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.023753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:45.250 [2024-07-26 03:52:00.023887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:45.250 [2024-07-26 03:52:00.023992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.028707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.028890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:45.250 [2024-07-26 03:52:00.028989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:23:45.250 [2024-07-26 03:52:00.029012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.029201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.029224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:45.250 [2024-07-26 03:52:00.029242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:45.250 [2024-07-26 03:52:00.029254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.029313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.029335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:45.250 [2024-07-26 03:52:00.029351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:45.250 [2024-07-26 03:52:00.029364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.029412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:45.250 [2024-07-26 03:52:00.033969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.034012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:45.250 [2024-07-26 03:52:00.034029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:23:45.250 [2024-07-26 03:52:00.034043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 
03:52:00.034138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.250 [2024-07-26 03:52:00.034164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:45.250 [2024-07-26 03:52:00.034179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:45.250 [2024-07-26 03:52:00.034193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.250 [2024-07-26 03:52:00.034248] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:45.250 [2024-07-26 03:52:00.034414] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:45.250 [2024-07-26 03:52:00.034433] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:45.250 [2024-07-26 03:52:00.034453] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:45.250 [2024-07-26 03:52:00.034470] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:45.250 [2024-07-26 03:52:00.034487] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:45.250 [2024-07-26 03:52:00.034502] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:45.250 [2024-07-26 03:52:00.034519] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:45.250 [2024-07-26 03:52:00.034531] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:45.250 [2024-07-26 03:52:00.034585] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:45.250 [2024-07-26 03:52:00.034610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.251 [2024-07-26 03:52:00.034633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:45.251 [2024-07-26 03:52:00.034650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:23:45.251 [2024-07-26 03:52:00.034664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.251 [2024-07-26 03:52:00.034772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.251 [2024-07-26 03:52:00.034792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:45.251 [2024-07-26 03:52:00.034806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:45.251 [2024-07-26 03:52:00.034843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.251 [2024-07-26 03:52:00.034977] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:45.251 [2024-07-26 03:52:00.035002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:45.251 [2024-07-26 03:52:00.035015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:45.251 [2024-07-26 03:52:00.035056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:23:45.251 [2024-07-26 03:52:00.035092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.251 [2024-07-26 03:52:00.035116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:45.251 [2024-07-26 03:52:00.035131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:45.251 [2024-07-26 03:52:00.035142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.251 [2024-07-26 03:52:00.035158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:45.251 [2024-07-26 03:52:00.035170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:45.251 [2024-07-26 03:52:00.035183] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:45.251 [2024-07-26 03:52:00.035209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:45.251 [2024-07-26 03:52:00.035247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:45.251 [2024-07-26 03:52:00.035284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:45.251 [2024-07-26 03:52:00.035320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:45.251 [2024-07-26 03:52:00.035357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:45.251 [2024-07-26 03:52:00.035392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.251 [2024-07-26 03:52:00.035418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:45.251 [2024-07-26 03:52:00.035431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:45.251 [2024-07-26 03:52:00.035441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.251 [2024-07-26 03:52:00.035454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:45.251 [2024-07-26 03:52:00.035466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:45.251 [2024-07-26 03:52:00.035480] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:45.251 [2024-07-26 03:52:00.035504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:45.251 [2024-07-26 03:52:00.035515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035528] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:45.251 [2024-07-26 03:52:00.035540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:45.251 [2024-07-26 03:52:00.035553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.251 [2024-07-26 03:52:00.035583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:45.251 [2024-07-26 03:52:00.035595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:45.251 [2024-07-26 03:52:00.035610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:45.251 [2024-07-26 03:52:00.035622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:45.251 [2024-07-26 03:52:00.035634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:45.251 [2024-07-26 03:52:00.035646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:45.251 [2024-07-26 03:52:00.035664] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:45.251 [2024-07-26 03:52:00.035679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.251 [2024-07-26 03:52:00.035696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:45.251 [2024-07-26 03:52:00.035709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:45.251 [2024-07-26 03:52:00.035723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:45.251 [2024-07-26 03:52:00.035735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:45.251 [2024-07-26 03:52:00.035749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:45.251 [2024-07-26 03:52:00.035762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:45.252 [2024-07-26 03:52:00.035776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:45.252 [2024-07-26 03:52:00.035788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:45.252 [2024-07-26 03:52:00.035804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:45.252 [2024-07-26 03:52:00.035829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:45.252 [2024-07-26 03:52:00.035849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:45.252 [2024-07-26 03:52:00.035862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:45.252 [2024-07-26 03:52:00.035876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:45.252 [2024-07-26 03:52:00.035889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:45.252 [2024-07-26 03:52:00.035903] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:45.252 [2024-07-26 03:52:00.035917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.252 [2024-07-26 03:52:00.035933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:45.252 [2024-07-26 03:52:00.035946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:45.252 [2024-07-26 03:52:00.035960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:45.252 [2024-07-26 03:52:00.035974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:45.252 [2024-07-26 03:52:00.035990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.252 [2024-07-26 03:52:00.036003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:45.252 [2024-07-26 03:52:00.036018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:23:45.252 [2024-07-26 03:52:00.036030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.252 [2024-07-26 03:52:00.036121] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
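The trace above covers the whole FTL bring-up for this test: attach the base and cache NVMe controllers, carve a thin-provisioned logical volume and a cache split, then create the ftl0 bdev. A minimal sketch of that same RPC sequence, assuming a running spdk_tgt and the PCIe addresses used in this run (the lvstore UUID and lvol bdev name stand in for the run-specific values printed above):

    # sketch only - condensed from the rpc.py calls traced above
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    ./scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    ./scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    ./scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    ./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-bdev> -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The layout numbers in the dump are consistent with those parameters: the 102400 MiB data region minus 10% overprovisioning leaves 92160 MiB of user-addressable space, i.e. 92160 * 256 = 23592960 4-KiB L2P entries, which at 4 bytes each is exactly the 90.00 MiB l2p region, of which --l2p_dram_limit 60 keeps at most ~59 MiB resident (as reported by ftl_l2p_cache later in the log).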
00:23:45.252 [2024-07-26 03:52:00.036140] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:47.158 [2024-07-26 03:52:02.027157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.158 [2024-07-26 03:52:02.027253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:47.158 [2024-07-26 03:52:02.027278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1991.034 ms 00:23:47.158 [2024-07-26 03:52:02.027292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.158 [2024-07-26 03:52:02.060772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.158 [2024-07-26 03:52:02.060880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.158 [2024-07-26 03:52:02.060906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.113 ms 00:23:47.158 [2024-07-26 03:52:02.060920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.158 [2024-07-26 03:52:02.061115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.158 [2024-07-26 03:52:02.061137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:47.158 [2024-07-26 03:52:02.061158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:47.158 [2024-07-26 03:52:02.061171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.111082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.111153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:47.417 [2024-07-26 03:52:02.111176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.862 ms 00:23:47.417 [2024-07-26 03:52:02.111191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.111358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.111380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.417 [2024-07-26 03:52:02.111396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:47.417 [2024-07-26 03:52:02.111409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.111767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.111793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.417 [2024-07-26 03:52:02.111811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:23:47.417 [2024-07-26 03:52:02.111840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.111994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.112012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:47.417 [2024-07-26 03:52:02.112027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:47.417 [2024-07-26 03:52:02.112039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.130239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.130296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.417 [2024-07-26 
03:52:02.130318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.155 ms 00:23:47.417 [2024-07-26 03:52:02.130331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.144001] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:47.417 [2024-07-26 03:52:02.158188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.158261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:47.417 [2024-07-26 03:52:02.158285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.686 ms 00:23:47.417 [2024-07-26 03:52:02.158300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.222759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.222850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:47.417 [2024-07-26 03:52:02.222873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.320 ms 00:23:47.417 [2024-07-26 03:52:02.222888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.223199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.223231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:47.417 [2024-07-26 03:52:02.223247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:23:47.417 [2024-07-26 03:52:02.223265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.254906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.254970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:47.417 [2024-07-26 03:52:02.254991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.596 ms 00:23:47.417 [2024-07-26 03:52:02.255006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.286231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.286295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:47.417 [2024-07-26 03:52:02.286317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.110 ms 00:23:47.417 [2024-07-26 03:52:02.286331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.417 [2024-07-26 03:52:02.287169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.417 [2024-07-26 03:52:02.287202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:47.417 [2024-07-26 03:52:02.287219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:23:47.417 [2024-07-26 03:52:02.287233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.675 [2024-07-26 03:52:02.387933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.675 [2024-07-26 03:52:02.388020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:47.675 [2024-07-26 03:52:02.388057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.653 ms 00:23:47.675 [2024-07-26 03:52:02.388082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.675 [2024-07-26 
03:52:02.427420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.675 [2024-07-26 03:52:02.427495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:47.675 [2024-07-26 03:52:02.427524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.202 ms 00:23:47.675 [2024-07-26 03:52:02.427542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.675 [2024-07-26 03:52:02.466191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.675 [2024-07-26 03:52:02.466265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:47.675 [2024-07-26 03:52:02.466289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.523 ms 00:23:47.675 [2024-07-26 03:52:02.466306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.675 [2024-07-26 03:52:02.504812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.675 [2024-07-26 03:52:02.504886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:47.675 [2024-07-26 03:52:02.504910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.375 ms 00:23:47.675 [2024-07-26 03:52:02.504928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.675 [2024-07-26 03:52:02.505060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.675 [2024-07-26 03:52:02.505092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:47.675 [2024-07-26 03:52:02.505111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:47.675 [2024-07-26 03:52:02.505132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.675 [2024-07-26 03:52:02.505241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.675 [2024-07-26 03:52:02.505266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:47.675 [2024-07-26 03:52:02.505283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:47.675 [2024-07-26 03:52:02.505327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.675 [2024-07-26 03:52:02.506520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:47.675 [2024-07-26 03:52:02.511619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2490.473 ms, result 0 00:23:47.675 [2024-07-26 03:52:02.512486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:47.675 { 00:23:47.675 "name": "ftl0", 00:23:47.675 "uuid": "f7b734ba-648f-487f-8a7d-88c9b2c09583" 00:23:47.675 } 00:23:47.675 03:52:02 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:47.675 03:52:02 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:23:47.675 03:52:02 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:47.675 03:52:02 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:23:47.675 03:52:02 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:47.675 03:52:02 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:47.675 03:52:02 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:47.932 03:52:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:48.191 [ 00:23:48.191 { 00:23:48.191 "name": "ftl0", 00:23:48.191 "aliases": [ 00:23:48.191 "f7b734ba-648f-487f-8a7d-88c9b2c09583" 00:23:48.191 ], 00:23:48.191 "product_name": "FTL disk", 00:23:48.191 "block_size": 4096, 00:23:48.191 "num_blocks": 23592960, 00:23:48.191 "uuid": "f7b734ba-648f-487f-8a7d-88c9b2c09583", 00:23:48.191 "assigned_rate_limits": { 00:23:48.191 "rw_ios_per_sec": 0, 00:23:48.191 "rw_mbytes_per_sec": 0, 00:23:48.191 "r_mbytes_per_sec": 0, 00:23:48.191 "w_mbytes_per_sec": 0 00:23:48.191 }, 00:23:48.191 "claimed": false, 00:23:48.191 "zoned": false, 00:23:48.191 "supported_io_types": { 00:23:48.191 "read": true, 00:23:48.191 "write": true, 00:23:48.191 "unmap": true, 00:23:48.191 "flush": true, 00:23:48.191 "reset": false, 00:23:48.191 "nvme_admin": false, 00:23:48.191 "nvme_io": false, 00:23:48.191 "nvme_io_md": false, 00:23:48.191 "write_zeroes": true, 00:23:48.191 "zcopy": false, 00:23:48.191 "get_zone_info": false, 00:23:48.191 "zone_management": false, 00:23:48.191 "zone_append": false, 00:23:48.191 "compare": false, 00:23:48.191 "compare_and_write": false, 00:23:48.191 "abort": false, 00:23:48.191 "seek_hole": false, 00:23:48.191 "seek_data": false, 00:23:48.191 "copy": false, 00:23:48.191 "nvme_iov_md": false 00:23:48.191 }, 00:23:48.191 "driver_specific": { 00:23:48.191 "ftl": { 00:23:48.191 "base_bdev": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:48.191 "cache": "nvc0n1p0" 00:23:48.191 } 00:23:48.191 } 00:23:48.191 } 00:23:48.191 ] 00:23:48.191 03:52:03 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:23:48.191 03:52:03 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:48.191 03:52:03 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:48.449 03:52:03 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:48.449 03:52:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:48.707 03:52:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:48.707 { 00:23:48.707 "name": "ftl0", 00:23:48.707 "aliases": [ 00:23:48.707 "f7b734ba-648f-487f-8a7d-88c9b2c09583" 00:23:48.707 ], 00:23:48.707 "product_name": "FTL disk", 00:23:48.707 "block_size": 4096, 00:23:48.707 "num_blocks": 23592960, 00:23:48.707 "uuid": "f7b734ba-648f-487f-8a7d-88c9b2c09583", 00:23:48.707 "assigned_rate_limits": { 00:23:48.707 "rw_ios_per_sec": 0, 00:23:48.707 "rw_mbytes_per_sec": 0, 00:23:48.707 "r_mbytes_per_sec": 0, 00:23:48.707 "w_mbytes_per_sec": 0 00:23:48.707 }, 00:23:48.707 "claimed": false, 00:23:48.707 "zoned": false, 00:23:48.707 "supported_io_types": { 00:23:48.707 "read": true, 00:23:48.707 "write": true, 00:23:48.707 "unmap": true, 00:23:48.707 "flush": true, 00:23:48.707 "reset": false, 00:23:48.707 "nvme_admin": false, 00:23:48.707 "nvme_io": false, 00:23:48.707 "nvme_io_md": false, 00:23:48.707 "write_zeroes": true, 00:23:48.707 "zcopy": false, 00:23:48.707 "get_zone_info": false, 00:23:48.707 "zone_management": false, 00:23:48.707 "zone_append": false, 00:23:48.707 "compare": false, 00:23:48.707 "compare_and_write": false, 00:23:48.707 "abort": false, 00:23:48.707 "seek_hole": false, 00:23:48.707 "seek_data": false, 00:23:48.707 "copy": false, 00:23:48.707 "nvme_iov_md": false 00:23:48.707 }, 00:23:48.707 "driver_specific": { 00:23:48.707 "ftl": { 00:23:48.707 "base_bdev": "e4e8b930-9d7f-46a5-b0dc-c9508c8cc11f", 00:23:48.707 "cache": "nvc0n1p0" 
00:23:48.707 } 00:23:48.707 } 00:23:48.707 } 00:23:48.707 ]' 00:23:48.707 03:52:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:48.965 03:52:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:48.965 03:52:03 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:48.965 [2024-07-26 03:52:03.867344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.965 [2024-07-26 03:52:03.867426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:48.965 [2024-07-26 03:52:03.867455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:48.965 [2024-07-26 03:52:03.867470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.965 [2024-07-26 03:52:03.867523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:49.223 [2024-07-26 03:52:03.870879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.223 [2024-07-26 03:52:03.870921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:49.223 [2024-07-26 03:52:03.870939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:23:49.223 [2024-07-26 03:52:03.870957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.223 [2024-07-26 03:52:03.871566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.223 [2024-07-26 03:52:03.871607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:49.223 [2024-07-26 03:52:03.871625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:23:49.223 [2024-07-26 03:52:03.871646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.223 [2024-07-26 03:52:03.875378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.223 [2024-07-26 03:52:03.875415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:49.223 [2024-07-26 03:52:03.875432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:23:49.223 [2024-07-26 03:52:03.875447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:03.882976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:03.883018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:49.224 [2024-07-26 03:52:03.883035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.476 ms 00:23:49.224 [2024-07-26 03:52:03.883050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:03.914509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:03.914581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:49.224 [2024-07-26 03:52:03.914605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.353 ms 00:23:49.224 [2024-07-26 03:52:03.914625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:03.933630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:03.933714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:49.224 [2024-07-26 03:52:03.933740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.880 ms 00:23:49.224 
[2024-07-26 03:52:03.933756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:03.934063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:03.934092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:49.224 [2024-07-26 03:52:03.934107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:23:49.224 [2024-07-26 03:52:03.934122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:03.966050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:03.966137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:49.224 [2024-07-26 03:52:03.966161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.888 ms 00:23:49.224 [2024-07-26 03:52:03.966177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:03.998668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:03.998754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:49.224 [2024-07-26 03:52:03.998777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.363 ms 00:23:49.224 [2024-07-26 03:52:03.998797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:04.029778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:04.029879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:49.224 [2024-07-26 03:52:04.029902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.834 ms 00:23:49.224 [2024-07-26 03:52:04.029918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:04.060962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.224 [2024-07-26 03:52:04.061030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:49.224 [2024-07-26 03:52:04.061052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.859 ms 00:23:49.224 [2024-07-26 03:52:04.061068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.224 [2024-07-26 03:52:04.061175] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:49.224 [2024-07-26 03:52:04.061209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061316] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061697] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:49.224 [2024-07-26 03:52:04.061907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.061924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.061937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.061952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.061965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.061979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.061992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 
03:52:04.062078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:23:49.225 [2024-07-26 03:52:04.062427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:49.225 [2024-07-26 03:52:04.062722] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:49.225 [2024-07-26 03:52:04.062736] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7b734ba-648f-487f-8a7d-88c9b2c09583 00:23:49.225 [2024-07-26 03:52:04.062753] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:49.225 [2024-07-26 03:52:04.062768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:49.225 [2024-07-26 03:52:04.062781] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:49.225 [2024-07-26 03:52:04.062794] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:49.225 [2024-07-26 03:52:04.062807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:49.225 [2024-07-26 03:52:04.062831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:49.225 [2024-07-26 03:52:04.062848] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:49.225 [2024-07-26 03:52:04.062859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:49.225 [2024-07-26 03:52:04.062871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:49.225 [2024-07-26 03:52:04.062884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.225 [2024-07-26 03:52:04.062899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:49.225 [2024-07-26 03:52:04.062913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:23:49.225 [2024-07-26 03:52:04.062927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.225 [2024-07-26 03:52:04.079715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.225 [2024-07-26 03:52:04.079772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:49.225 [2024-07-26 03:52:04.079792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.743 ms 00:23:49.225 [2024-07-26 03:52:04.079810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.225 [2024-07-26 03:52:04.080340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.225 [2024-07-26 03:52:04.080381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:49.225 [2024-07-26 03:52:04.080399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:23:49.225 [2024-07-26 03:52:04.080414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.483 [2024-07-26 03:52:04.138497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.483 [2024-07-26 03:52:04.138561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:49.483 [2024-07-26 03:52:04.138592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.483 [2024-07-26 03:52:04.138609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.483 [2024-07-26 03:52:04.138765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.483 [2024-07-26 03:52:04.138790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:49.483 [2024-07-26 03:52:04.138805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.483 [2024-07-26 03:52:04.138834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.483 [2024-07-26 03:52:04.138940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.483 [2024-07-26 03:52:04.138965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:49.483 [2024-07-26 03:52:04.138980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.483 [2024-07-26 03:52:04.138997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.483 [2024-07-26 03:52:04.139038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.483 [2024-07-26 03:52:04.139056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:49.484 [2024-07-26 03:52:04.139069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.139082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.244972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:23:49.484 [2024-07-26 03:52:04.245041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.484 [2024-07-26 03:52:04.245063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.245078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.329393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.484 [2024-07-26 03:52:04.329472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.484 [2024-07-26 03:52:04.329494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.329509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.329637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.484 [2024-07-26 03:52:04.329667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:49.484 [2024-07-26 03:52:04.329683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.329699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.329762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.484 [2024-07-26 03:52:04.329782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:49.484 [2024-07-26 03:52:04.329795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.329809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.329990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.484 [2024-07-26 03:52:04.330016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:49.484 [2024-07-26 03:52:04.330052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.330067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.330143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.484 [2024-07-26 03:52:04.330167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:49.484 [2024-07-26 03:52:04.330181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.330195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.330266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.484 [2024-07-26 03:52:04.330287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:49.484 [2024-07-26 03:52:04.330303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.330319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 03:52:04.330391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.484 [2024-07-26 03:52:04.330413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:49.484 [2024-07-26 03:52:04.330426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.484 [2024-07-26 03:52:04.330440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.484 [2024-07-26 
03:52:04.330675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.327 ms, result 0 00:23:49.484 true 00:23:49.484 03:52:04 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 81065 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81065 ']' 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81065 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81065 00:23:49.484 killing process with pid 81065 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81065' 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81065 00:23:49.484 03:52:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81065 00:23:54.779 03:52:08 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:55.713 65536+0 records in 00:23:55.713 65536+0 records out 00:23:55.713 268435456 bytes (268 MB, 256 MiB) copied, 1.43609 s, 187 MB/s 00:23:55.713 03:52:10 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:55.713 [2024-07-26 03:52:10.458795] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
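For reference, the pattern write that spdk_dd is starting here can be replayed by hand. A minimal sketch, assuming it is run from the SPDK repository root, that the ftl0 bdev already exists, and that its bdev subsystem configuration was saved to ftl.json as trim.sh does above:

# Build the same 256 MiB random pattern the test uses (65536 x 4 KiB blocks).
dd if=/dev/urandom of=random_pattern bs=4K count=65536
# Replay the pattern into the ftl0 bdev through spdk_dd, using the saved bdev config.
./build/bin/spdk_dd --if=random_pattern --ob=ftl0 --json=ftl.json

The --if/--ob/--json options are the ones visible in the trim.sh@69 invocation above; the paths are simply shortened relative to the repository root.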
00:23:55.713 [2024-07-26 03:52:10.458993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81264 ] 00:23:55.971 [2024-07-26 03:52:10.629524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.971 [2024-07-26 03:52:10.856746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.548 [2024-07-26 03:52:11.176964] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.548 [2024-07-26 03:52:11.177045] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.548 [2024-07-26 03:52:11.337811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.337890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:56.548 [2024-07-26 03:52:11.337912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:56.548 [2024-07-26 03:52:11.337924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.341080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.341126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.548 [2024-07-26 03:52:11.341143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:23:56.548 [2024-07-26 03:52:11.341155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.341276] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:56.548 [2024-07-26 03:52:11.342237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:56.548 [2024-07-26 03:52:11.342279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.342294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.548 [2024-07-26 03:52:11.342307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:23:56.548 [2024-07-26 03:52:11.342319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.343522] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:56.548 [2024-07-26 03:52:11.360228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.360287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:56.548 [2024-07-26 03:52:11.360328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.707 ms 00:23:56.548 [2024-07-26 03:52:11.360341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.360464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.360487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:56.548 [2024-07-26 03:52:11.360501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:56.548 [2024-07-26 03:52:11.360527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.365031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:56.548 [2024-07-26 03:52:11.365091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.548 [2024-07-26 03:52:11.365108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.429 ms 00:23:56.548 [2024-07-26 03:52:11.365120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.365253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.365276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.548 [2024-07-26 03:52:11.365289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:56.548 [2024-07-26 03:52:11.365300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.365345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.365363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:56.548 [2024-07-26 03:52:11.365379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:56.548 [2024-07-26 03:52:11.365391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.365424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:56.548 [2024-07-26 03:52:11.369765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.369804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.548 [2024-07-26 03:52:11.369837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:23:56.548 [2024-07-26 03:52:11.369850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.369923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.369942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:56.548 [2024-07-26 03:52:11.369956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:56.548 [2024-07-26 03:52:11.369967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.369997] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:56.548 [2024-07-26 03:52:11.370027] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:56.548 [2024-07-26 03:52:11.370075] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:56.548 [2024-07-26 03:52:11.370097] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:56.548 [2024-07-26 03:52:11.370202] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:56.548 [2024-07-26 03:52:11.370218] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:56.548 [2024-07-26 03:52:11.370232] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:56.548 [2024-07-26 03:52:11.370247] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:56.548 [2024-07-26 03:52:11.370260] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:56.548 [2024-07-26 03:52:11.370278] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:56.548 [2024-07-26 03:52:11.370289] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:56.548 [2024-07-26 03:52:11.370301] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:56.548 [2024-07-26 03:52:11.370312] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:56.548 [2024-07-26 03:52:11.370324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.370335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:56.548 [2024-07-26 03:52:11.370347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:23:56.548 [2024-07-26 03:52:11.370358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.370460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.548 [2024-07-26 03:52:11.370484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:56.548 [2024-07-26 03:52:11.370503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:56.548 [2024-07-26 03:52:11.370514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.548 [2024-07-26 03:52:11.370663] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:56.548 [2024-07-26 03:52:11.370684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:56.548 [2024-07-26 03:52:11.370698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.548 [2024-07-26 03:52:11.370710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.548 [2024-07-26 03:52:11.370721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:56.548 [2024-07-26 03:52:11.370732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:56.548 [2024-07-26 03:52:11.370743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:56.548 [2024-07-26 03:52:11.370754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:56.548 [2024-07-26 03:52:11.370766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:56.548 [2024-07-26 03:52:11.370776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.548 [2024-07-26 03:52:11.370787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:56.548 [2024-07-26 03:52:11.370797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:56.548 [2024-07-26 03:52:11.370808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.548 [2024-07-26 03:52:11.370838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:56.548 [2024-07-26 03:52:11.370852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:56.548 [2024-07-26 03:52:11.370865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.548 [2024-07-26 03:52:11.370876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:56.548 [2024-07-26 03:52:11.370886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:56.548 [2024-07-26 03:52:11.370911] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.548 [2024-07-26 03:52:11.370923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:56.548 [2024-07-26 03:52:11.370934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:56.549 [2024-07-26 03:52:11.370945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.549 [2024-07-26 03:52:11.370957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:56.549 [2024-07-26 03:52:11.370967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:56.549 [2024-07-26 03:52:11.370977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.549 [2024-07-26 03:52:11.370987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:56.549 [2024-07-26 03:52:11.370998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:56.549 [2024-07-26 03:52:11.371008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.549 [2024-07-26 03:52:11.371018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:56.549 [2024-07-26 03:52:11.371029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:56.549 [2024-07-26 03:52:11.371039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.549 [2024-07-26 03:52:11.371049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:56.549 [2024-07-26 03:52:11.371059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:56.549 [2024-07-26 03:52:11.371069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.549 [2024-07-26 03:52:11.371079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:56.549 [2024-07-26 03:52:11.371090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:56.549 [2024-07-26 03:52:11.371100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.549 [2024-07-26 03:52:11.371110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:56.549 [2024-07-26 03:52:11.371121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:56.549 [2024-07-26 03:52:11.371131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.549 [2024-07-26 03:52:11.371141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:56.549 [2024-07-26 03:52:11.371151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:56.549 [2024-07-26 03:52:11.371161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.549 [2024-07-26 03:52:11.371171] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:56.549 [2024-07-26 03:52:11.371183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:56.549 [2024-07-26 03:52:11.371193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.549 [2024-07-26 03:52:11.371204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.549 [2024-07-26 03:52:11.371221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:56.549 [2024-07-26 03:52:11.371233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:56.549 [2024-07-26 03:52:11.371244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:56.549 
[2024-07-26 03:52:11.371255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:56.549 [2024-07-26 03:52:11.371265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:56.549 [2024-07-26 03:52:11.371275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:56.549 [2024-07-26 03:52:11.371287] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:56.549 [2024-07-26 03:52:11.371301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.549 [2024-07-26 03:52:11.371314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:56.549 [2024-07-26 03:52:11.371325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:56.549 [2024-07-26 03:52:11.371337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:56.549 [2024-07-26 03:52:11.371348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:56.549 [2024-07-26 03:52:11.371359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:56.549 [2024-07-26 03:52:11.371370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:56.549 [2024-07-26 03:52:11.371382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:56.549 [2024-07-26 03:52:11.371393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:56.549 [2024-07-26 03:52:11.371404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:56.549 [2024-07-26 03:52:11.371415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:56.549 [2024-07-26 03:52:11.371427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:56.549 [2024-07-26 03:52:11.371438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:56.549 [2024-07-26 03:52:11.371449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:56.549 [2024-07-26 03:52:11.371461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:56.549 [2024-07-26 03:52:11.371472] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:56.549 [2024-07-26 03:52:11.371485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.549 [2024-07-26 03:52:11.371497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:56.549 [2024-07-26 03:52:11.371508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:56.549 [2024-07-26 03:52:11.371519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:56.549 [2024-07-26 03:52:11.371531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:56.549 [2024-07-26 03:52:11.371543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.549 [2024-07-26 03:52:11.371555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:56.549 [2024-07-26 03:52:11.371566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:23:56.549 [2024-07-26 03:52:11.371577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.549 [2024-07-26 03:52:11.423061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.549 [2024-07-26 03:52:11.423130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.549 [2024-07-26 03:52:11.423158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.398 ms 00:23:56.549 [2024-07-26 03:52:11.423171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.549 [2024-07-26 03:52:11.423375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.549 [2024-07-26 03:52:11.423398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:56.549 [2024-07-26 03:52:11.423419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:56.549 [2024-07-26 03:52:11.423431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.462639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.462700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.807 [2024-07-26 03:52:11.462721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.171 ms 00:23:56.807 [2024-07-26 03:52:11.462733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.462938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.462961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.807 [2024-07-26 03:52:11.462975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:56.807 [2024-07-26 03:52:11.462987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.463311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.463337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.807 [2024-07-26 03:52:11.463351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:23:56.807 [2024-07-26 03:52:11.463362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.463541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.463562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.807 [2024-07-26 03:52:11.463575] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:56.807 [2024-07-26 03:52:11.463587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.480082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.480139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.807 [2024-07-26 03:52:11.480157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.463 ms 00:23:56.807 [2024-07-26 03:52:11.480169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.497373] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:56.807 [2024-07-26 03:52:11.497420] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:56.807 [2024-07-26 03:52:11.497440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.497453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:56.807 [2024-07-26 03:52:11.497467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.101 ms 00:23:56.807 [2024-07-26 03:52:11.497478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.528447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.528498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:56.807 [2024-07-26 03:52:11.528516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.859 ms 00:23:56.807 [2024-07-26 03:52:11.528528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.544275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.544322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:56.807 [2024-07-26 03:52:11.544341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.635 ms 00:23:56.807 [2024-07-26 03:52:11.544352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.807 [2024-07-26 03:52:11.559908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.807 [2024-07-26 03:52:11.559966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:56.808 [2024-07-26 03:52:11.559987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.452 ms 00:23:56.808 [2024-07-26 03:52:11.559999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.560919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.560966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:56.808 [2024-07-26 03:52:11.560983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:23:56.808 [2024-07-26 03:52:11.560994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.634276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.634352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:56.808 [2024-07-26 03:52:11.634374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.247 ms 00:23:56.808 [2024-07-26 03:52:11.634386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.647135] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:56.808 [2024-07-26 03:52:11.660885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.660955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:56.808 [2024-07-26 03:52:11.660976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.342 ms 00:23:56.808 [2024-07-26 03:52:11.660987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.661134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.661156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:56.808 [2024-07-26 03:52:11.661176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:56.808 [2024-07-26 03:52:11.661188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.661253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.661271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:56.808 [2024-07-26 03:52:11.661283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:56.808 [2024-07-26 03:52:11.661295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.661328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.661344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:56.808 [2024-07-26 03:52:11.661357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:56.808 [2024-07-26 03:52:11.661373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.661411] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:56.808 [2024-07-26 03:52:11.661428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.661440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:56.808 [2024-07-26 03:52:11.661452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:56.808 [2024-07-26 03:52:11.661464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.692652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.692706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:56.808 [2024-07-26 03:52:11.692735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.156 ms 00:23:56.808 [2024-07-26 03:52:11.692747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.808 [2024-07-26 03:52:11.692915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.808 [2024-07-26 03:52:11.692939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:56.808 [2024-07-26 03:52:11.692953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:56.808 [2024-07-26 03:52:11.692964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:56.808 [2024-07-26 03:52:11.693894] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:56.808 [2024-07-26 03:52:11.697972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 355.738 ms, result 0 00:23:56.808 [2024-07-26 03:52:11.698786] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:57.065 [2024-07-26 03:52:11.715214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.202  Copying: 25/256 [MB] (25 MBps) Copying: 50/256 [MB] (24 MBps) Copying: 73/256 [MB] (22 MBps) Copying: 97/256 [MB] (24 MBps) Copying: 122/256 [MB] (25 MBps) Copying: 147/256 [MB] (24 MBps) Copying: 170/256 [MB] (23 MBps) Copying: 195/256 [MB] (25 MBps) Copying: 222/256 [MB] (26 MBps) Copying: 249/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-26 03:52:21.940160] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:07.202 [2024-07-26 03:52:21.952473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:21.952518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:07.202 [2024-07-26 03:52:21.952538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:07.202 [2024-07-26 03:52:21.952550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:21.952582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:07.202 [2024-07-26 03:52:21.955829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:21.955865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:07.202 [2024-07-26 03:52:21.955879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:24:07.202 [2024-07-26 03:52:21.955891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:21.957430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:21.957468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:07.202 [2024-07-26 03:52:21.957484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:24:07.202 [2024-07-26 03:52:21.957495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:21.964890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:21.964923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:07.202 [2024-07-26 03:52:21.964938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.370 ms 00:24:07.202 [2024-07-26 03:52:21.964958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:21.972652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:21.972682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:07.202 [2024-07-26 03:52:21.972696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.630 ms 00:24:07.202 [2024-07-26 03:52:21.972708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 
03:52:22.003703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:22.003755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:07.202 [2024-07-26 03:52:22.003779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.938 ms 00:24:07.202 [2024-07-26 03:52:22.003792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:22.021441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:22.021489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:07.202 [2024-07-26 03:52:22.021507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.553 ms 00:24:07.202 [2024-07-26 03:52:22.021519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:22.021698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:22.021720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:07.202 [2024-07-26 03:52:22.021734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:07.202 [2024-07-26 03:52:22.021745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:22.053056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:22.053104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:07.202 [2024-07-26 03:52:22.053122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.286 ms 00:24:07.202 [2024-07-26 03:52:22.053134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.202 [2024-07-26 03:52:22.083915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.202 [2024-07-26 03:52:22.083960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:07.202 [2024-07-26 03:52:22.083978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.713 ms 00:24:07.202 [2024-07-26 03:52:22.083989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-26 03:52:22.114619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.461 [2024-07-26 03:52:22.114685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:07.461 [2024-07-26 03:52:22.114704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.559 ms 00:24:07.461 [2024-07-26 03:52:22.114716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-26 03:52:22.145386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.461 [2024-07-26 03:52:22.145434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:07.461 [2024-07-26 03:52:22.145452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.540 ms 00:24:07.461 [2024-07-26 03:52:22.145464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-26 03:52:22.145532] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:07.461 [2024-07-26 03:52:22.145558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:24:07.461 [2024-07-26 03:52:22.145594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:24:07.461 [2024-07-26 03:52:22.145942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.145990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.146002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-26 03:52:22.146016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146526] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:07.462 [2024-07-26 03:52:22.146845] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:07.462 [2024-07-26 03:52:22.146858] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: f7b734ba-648f-487f-8a7d-88c9b2c09583 00:24:07.462 [2024-07-26 03:52:22.146870] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:07.462 [2024-07-26 03:52:22.146881] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:07.462 [2024-07-26 03:52:22.146892] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:07.462 [2024-07-26 03:52:22.146917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:07.462 [2024-07-26 03:52:22.146928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:07.462 [2024-07-26 03:52:22.146940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:07.462 [2024-07-26 03:52:22.146951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:07.462 [2024-07-26 03:52:22.146961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:07.462 [2024-07-26 03:52:22.146971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:07.462 [2024-07-26 03:52:22.146983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.462 [2024-07-26 03:52:22.146994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:07.462 [2024-07-26 03:52:22.147008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:24:07.462 [2024-07-26 03:52:22.147024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.462 [2024-07-26 03:52:22.164113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.462 [2024-07-26 03:52:22.164158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:07.462 [2024-07-26 03:52:22.164176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.060 ms 00:24:07.462 [2024-07-26 03:52:22.164188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.462 [2024-07-26 03:52:22.164642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.462 [2024-07-26 03:52:22.164675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:07.462 [2024-07-26 03:52:22.164699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:24:07.462 [2024-07-26 03:52:22.164711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.462 [2024-07-26 03:52:22.204693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.462 [2024-07-26 03:52:22.204756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.462 [2024-07-26 03:52:22.204776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.463 [2024-07-26 03:52:22.204788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.463 [2024-07-26 03:52:22.204954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.463 [2024-07-26 03:52:22.204976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.463 [2024-07-26 03:52:22.204995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.463 [2024-07-26 03:52:22.205007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.463 [2024-07-26 03:52:22.205074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.463 [2024-07-26 03:52:22.205098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:24:07.463 [2024-07-26 03:52:22.205111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.463 [2024-07-26 03:52:22.205128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.463 [2024-07-26 03:52:22.205165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.463 [2024-07-26 03:52:22.205191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.463 [2024-07-26 03:52:22.205207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.463 [2024-07-26 03:52:22.205225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.463 [2024-07-26 03:52:22.305049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.463 [2024-07-26 03:52:22.305127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.463 [2024-07-26 03:52:22.305148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.463 [2024-07-26 03:52:22.305161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.392053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.722 [2024-07-26 03:52:22.392130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.722 [2024-07-26 03:52:22.392163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.722 [2024-07-26 03:52:22.392175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.392268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.722 [2024-07-26 03:52:22.392288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.722 [2024-07-26 03:52:22.392301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.722 [2024-07-26 03:52:22.392313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.392353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.722 [2024-07-26 03:52:22.392371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.722 [2024-07-26 03:52:22.392383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.722 [2024-07-26 03:52:22.392394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.392533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.722 [2024-07-26 03:52:22.392559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.722 [2024-07-26 03:52:22.392576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.722 [2024-07-26 03:52:22.392592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.392655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.722 [2024-07-26 03:52:22.392679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:07.722 [2024-07-26 03:52:22.392696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.722 [2024-07-26 03:52:22.392711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.392779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.722 [2024-07-26 03:52:22.392800] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.722 [2024-07-26 03:52:22.392850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.722 [2024-07-26 03:52:22.392872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.392955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.722 [2024-07-26 03:52:22.392977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.722 [2024-07-26 03:52:22.392994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.722 [2024-07-26 03:52:22.393009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.722 [2024-07-26 03:52:22.393240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 440.741 ms, result 0 00:24:08.657 00:24:08.657 00:24:08.657 03:52:23 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81402 00:24:08.657 03:52:23 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:08.657 03:52:23 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81402 00:24:08.657 03:52:23 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81402 ']' 00:24:08.657 03:52:23 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.657 03:52:23 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.657 03:52:23 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.657 03:52:23 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.657 03:52:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:08.914 [2024-07-26 03:52:23.603307] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:24:08.914 [2024-07-26 03:52:23.603491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81402 ] 00:24:08.914 [2024-07-26 03:52:23.777802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.172 [2024-07-26 03:52:24.013506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.104 03:52:24 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.104 03:52:24 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:24:10.104 03:52:24 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:10.363 [2024-07-26 03:52:25.008236] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.363 [2024-07-26 03:52:25.008327] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.363 [2024-07-26 03:52:25.186247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.186314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:10.363 [2024-07-26 03:52:25.186338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:10.363 [2024-07-26 03:52:25.186355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.189539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.189591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.363 [2024-07-26 03:52:25.189610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:24:10.363 [2024-07-26 03:52:25.189626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.189752] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:10.363 [2024-07-26 03:52:25.190792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:10.363 [2024-07-26 03:52:25.190855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.190876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.363 [2024-07-26 03:52:25.190891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:24:10.363 [2024-07-26 03:52:25.190910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.192203] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:10.363 [2024-07-26 03:52:25.208506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.208554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:10.363 [2024-07-26 03:52:25.208578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.298 ms 00:24:10.363 [2024-07-26 03:52:25.208592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.208718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.208743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:10.363 [2024-07-26 03:52:25.208761] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:10.363 [2024-07-26 03:52:25.208775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.213367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.213415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.363 [2024-07-26 03:52:25.213452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:24:10.363 [2024-07-26 03:52:25.213466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.213631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.213655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.363 [2024-07-26 03:52:25.213673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:24:10.363 [2024-07-26 03:52:25.213691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.213736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.213753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:10.363 [2024-07-26 03:52:25.213769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:10.363 [2024-07-26 03:52:25.213782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.213838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:10.363 [2024-07-26 03:52:25.218126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.218172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.363 [2024-07-26 03:52:25.218192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:24:10.363 [2024-07-26 03:52:25.218207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.218278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.218305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:10.363 [2024-07-26 03:52:25.218323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:10.363 [2024-07-26 03:52:25.218339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.363 [2024-07-26 03:52:25.218369] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:10.363 [2024-07-26 03:52:25.218401] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:10.363 [2024-07-26 03:52:25.218454] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:10.363 [2024-07-26 03:52:25.218498] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:10.363 [2024-07-26 03:52:25.218620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:10.363 [2024-07-26 03:52:25.218657] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:10.363 [2024-07-26 03:52:25.218674] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:10.363 [2024-07-26 03:52:25.218693] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:10.363 [2024-07-26 03:52:25.218709] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:10.363 [2024-07-26 03:52:25.218725] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:10.363 [2024-07-26 03:52:25.218737] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:10.363 [2024-07-26 03:52:25.218752] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:10.363 [2024-07-26 03:52:25.218765] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:10.363 [2024-07-26 03:52:25.218783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.363 [2024-07-26 03:52:25.218796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:10.363 [2024-07-26 03:52:25.218813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:24:10.363 [2024-07-26 03:52:25.218846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.364 [2024-07-26 03:52:25.218976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.364 [2024-07-26 03:52:25.218996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:10.364 [2024-07-26 03:52:25.219013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:10.364 [2024-07-26 03:52:25.219026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.364 [2024-07-26 03:52:25.219151] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:10.364 [2024-07-26 03:52:25.219173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:10.364 [2024-07-26 03:52:25.219190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:10.364 [2024-07-26 03:52:25.219238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:10.364 [2024-07-26 03:52:25.219284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.364 [2024-07-26 03:52:25.219311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:10.364 [2024-07-26 03:52:25.219323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:10.364 [2024-07-26 03:52:25.219337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.364 [2024-07-26 03:52:25.219350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:10.364 [2024-07-26 03:52:25.219365] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:10.364 [2024-07-26 03:52:25.219377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.364 
[2024-07-26 03:52:25.219391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:10.364 [2024-07-26 03:52:25.219403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:10.364 [2024-07-26 03:52:25.219446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:10.364 [2024-07-26 03:52:25.219484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:10.364 [2024-07-26 03:52:25.219527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:10.364 [2024-07-26 03:52:25.219580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:10.364 [2024-07-26 03:52:25.219620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.364 [2024-07-26 03:52:25.219646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:10.364 [2024-07-26 03:52:25.219659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:10.364 [2024-07-26 03:52:25.219673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.364 [2024-07-26 03:52:25.219685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:10.364 [2024-07-26 03:52:25.219699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:10.364 [2024-07-26 03:52:25.219711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:10.364 [2024-07-26 03:52:25.219741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:10.364 [2024-07-26 03:52:25.219756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219768] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:10.364 [2024-07-26 03:52:25.219783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:10.364 [2024-07-26 03:52:25.219796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.364 [2024-07-26 03:52:25.219842] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:10.364 [2024-07-26 03:52:25.219858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:10.364 [2024-07-26 03:52:25.219878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:10.364 [2024-07-26 03:52:25.219894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:10.364 [2024-07-26 03:52:25.219917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:10.364 [2024-07-26 03:52:25.219932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:10.364 [2024-07-26 03:52:25.219946] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:10.364 [2024-07-26 03:52:25.219964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.364 [2024-07-26 03:52:25.219979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:10.364 [2024-07-26 03:52:25.219998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:10.364 [2024-07-26 03:52:25.220012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:10.364 [2024-07-26 03:52:25.220027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:10.364 [2024-07-26 03:52:25.220041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:10.364 [2024-07-26 03:52:25.220056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:10.364 [2024-07-26 03:52:25.220069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:10.364 [2024-07-26 03:52:25.220084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:10.364 [2024-07-26 03:52:25.220097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:10.364 [2024-07-26 03:52:25.220112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:10.364 [2024-07-26 03:52:25.220125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:10.364 [2024-07-26 03:52:25.220141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:10.364 [2024-07-26 03:52:25.220154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:10.364 [2024-07-26 03:52:25.220169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:10.364 [2024-07-26 03:52:25.220182] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:10.364 [2024-07-26 
03:52:25.220199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.364 [2024-07-26 03:52:25.220213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:10.364 [2024-07-26 03:52:25.220231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:10.364 [2024-07-26 03:52:25.220246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:10.364 [2024-07-26 03:52:25.220262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:10.364 [2024-07-26 03:52:25.220276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.364 [2024-07-26 03:52:25.220294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:10.364 [2024-07-26 03:52:25.220307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:24:10.364 [2024-07-26 03:52:25.220326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.364 [2024-07-26 03:52:25.253563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.364 [2024-07-26 03:52:25.253637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:10.364 [2024-07-26 03:52:25.253660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.156 ms 00:24:10.364 [2024-07-26 03:52:25.253677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.364 [2024-07-26 03:52:25.253886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.364 [2024-07-26 03:52:25.253915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:10.364 [2024-07-26 03:52:25.253931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:24:10.364 [2024-07-26 03:52:25.253946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.623 [2024-07-26 03:52:25.292737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.623 [2024-07-26 03:52:25.292803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:10.623 [2024-07-26 03:52:25.292845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.756 ms 00:24:10.623 [2024-07-26 03:52:25.292865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.623 [2024-07-26 03:52:25.293006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.623 [2024-07-26 03:52:25.293033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:10.623 [2024-07-26 03:52:25.293049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:10.623 [2024-07-26 03:52:25.293065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.623 [2024-07-26 03:52:25.293409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.623 [2024-07-26 03:52:25.293441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:10.623 [2024-07-26 03:52:25.293457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:24:10.623 [2024-07-26 03:52:25.293472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
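[editor's note] As a quick cross-check of the layout dump above, the hex blk_sz values in the "SB metadata layout" section line up with the MiB figures in the region dump if one assumes a 4 KiB FTL metadata block; that block size is not printed anywhere in the log, it is only implied by the numbers. A minimal shell sketch of the arithmetic:

    # assumed 4 KiB metadata block size; compare against the region dump printed above
    echo "l2p:     $(( 0x5a00 * 4096 / 1048576 )) MiB"   # -> 90, matches "Region l2p ... blocks: 90.00 MiB"
    echo "band_md: $(( 0x80   * 4096 / 1024 )) KiB"      # -> 512, i.e. the 0.50 MiB of "Region band_md"
    echo "sb:      $(( 0x20   * 4096 / 1024 )) KiB"      # -> 128, i.e. the 0.12 MiB of "Region sb"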
[FTL][ftl0] status: 0 00:24:10.623 [2024-07-26 03:52:25.293629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.623 [2024-07-26 03:52:25.293653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:10.623 [2024-07-26 03:52:25.293668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:10.623 [2024-07-26 03:52:25.293694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.311693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.311755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:10.624 [2024-07-26 03:52:25.311778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.969 ms 00:24:10.624 [2024-07-26 03:52:25.311794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.328229] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:10.624 [2024-07-26 03:52:25.328283] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:10.624 [2024-07-26 03:52:25.328305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.328323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:10.624 [2024-07-26 03:52:25.328338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.324 ms 00:24:10.624 [2024-07-26 03:52:25.328353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.358433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.358487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:10.624 [2024-07-26 03:52:25.358507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.979 ms 00:24:10.624 [2024-07-26 03:52:25.358528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.374345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.374397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:10.624 [2024-07-26 03:52:25.374429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.702 ms 00:24:10.624 [2024-07-26 03:52:25.374448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.390066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.390117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:10.624 [2024-07-26 03:52:25.390136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.520 ms 00:24:10.624 [2024-07-26 03:52:25.390153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.391031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.391086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:10.624 [2024-07-26 03:52:25.391104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:24:10.624 [2024-07-26 03:52:25.391120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 
03:52:25.476009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.476089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:10.624 [2024-07-26 03:52:25.476114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.852 ms 00:24:10.624 [2024-07-26 03:52:25.476132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.489029] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:10.624 [2024-07-26 03:52:25.503240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.503317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:10.624 [2024-07-26 03:52:25.503347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.945 ms 00:24:10.624 [2024-07-26 03:52:25.503362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.503507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.503530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:10.624 [2024-07-26 03:52:25.503548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:10.624 [2024-07-26 03:52:25.503562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.503633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.503655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:10.624 [2024-07-26 03:52:25.503671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:10.624 [2024-07-26 03:52:25.503685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.503722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.503740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:10.624 [2024-07-26 03:52:25.503756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:10.624 [2024-07-26 03:52:25.503769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.624 [2024-07-26 03:52:25.503844] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:10.624 [2024-07-26 03:52:25.503866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.624 [2024-07-26 03:52:25.503885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:10.624 [2024-07-26 03:52:25.503902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:10.624 [2024-07-26 03:52:25.503917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.917 [2024-07-26 03:52:25.535440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.917 [2024-07-26 03:52:25.535500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:10.917 [2024-07-26 03:52:25.535523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.487 ms 00:24:10.917 [2024-07-26 03:52:25.535540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.917 [2024-07-26 03:52:25.535685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.917 [2024-07-26 03:52:25.535717] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:10.917 [2024-07-26 03:52:25.535734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:10.917 [2024-07-26 03:52:25.535749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.917 [2024-07-26 03:52:25.536853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:10.917 [2024-07-26 03:52:25.541106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 350.197 ms, result 0 00:24:10.917 [2024-07-26 03:52:25.542083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:10.917 Some configs were skipped because the RPC state that can call them passed over. 00:24:10.917 03:52:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:11.198 [2024-07-26 03:52:25.848119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.198 [2024-07-26 03:52:25.848385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:11.198 [2024-07-26 03:52:25.848555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:24:11.198 [2024-07-26 03:52:25.848725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.198 [2024-07-26 03:52:25.848883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.190 ms, result 0 00:24:11.198 true 00:24:11.198 03:52:25 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:11.456 [2024-07-26 03:52:26.120008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.456 [2024-07-26 03:52:26.120276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:11.456 [2024-07-26 03:52:26.120430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:24:11.456 [2024-07-26 03:52:26.120509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.456 [2024-07-26 03:52:26.120709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.654 ms, result 0 00:24:11.456 true 00:24:11.456 03:52:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81402 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81402 ']' 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81402 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81402 00:24:11.456 killing process with pid 81402 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81402' 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81402 00:24:11.456 03:52:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81402 00:24:12.391 [2024-07-26 03:52:27.113269] 
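[editor's note] The two "Process trim" actions traced here come from the rpc.py calls quoted in the log: the first trims 1024 blocks at LBA 0, the second trims 1024 blocks ending exactly at the end of the LBA space (23591936 + 1024 = 23592960, the "L2P entries" count reported at the next startup). The calls as issued by the test:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

The killprocess helper whose xtrace output follows them boils down, roughly, to the sketch below; this is a simplified reconstruction from the traced commands, not the exact autotest_common.sh source:

    killprocess() {                                  # simplified reconstruction
        local pid=$1
        [ -z "$pid" ] && return 1                    # the '[ -z 81402 ]' check in the trace
        kill -0 "$pid" || return 0                   # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # never kill a bare sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap it so the test can continue
    }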
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.113345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:12.391 [2024-07-26 03:52:27.113372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.391 [2024-07-26 03:52:27.113390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.113428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:12.391 [2024-07-26 03:52:27.116801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.116856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:12.391 [2024-07-26 03:52:27.116876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:24:12.391 [2024-07-26 03:52:27.116895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.117223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.117262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:12.391 [2024-07-26 03:52:27.117280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:12.391 [2024-07-26 03:52:27.117295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.121356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.121415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:12.391 [2024-07-26 03:52:27.121435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.033 ms 00:24:12.391 [2024-07-26 03:52:27.121452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.129070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.129117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:12.391 [2024-07-26 03:52:27.129135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.566 ms 00:24:12.391 [2024-07-26 03:52:27.129154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.141578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.141630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:12.391 [2024-07-26 03:52:27.141651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.354 ms 00:24:12.391 [2024-07-26 03:52:27.141670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.150208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.150262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:12.391 [2024-07-26 03:52:27.150283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.487 ms 00:24:12.391 [2024-07-26 03:52:27.150299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.150462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.150489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:12.391 [2024-07-26 03:52:27.150513] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:24:12.391 [2024-07-26 03:52:27.150543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.163432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.163485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:12.391 [2024-07-26 03:52:27.163506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.861 ms 00:24:12.391 [2024-07-26 03:52:27.163522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.176075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.176126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:12.391 [2024-07-26 03:52:27.176146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:24:12.391 [2024-07-26 03:52:27.176167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.188402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.188453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:12.391 [2024-07-26 03:52:27.188473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.184 ms 00:24:12.391 [2024-07-26 03:52:27.188489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.200723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.391 [2024-07-26 03:52:27.200777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:12.391 [2024-07-26 03:52:27.200799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.155 ms 00:24:12.391 [2024-07-26 03:52:27.200831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.391 [2024-07-26 03:52:27.200884] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:12.391 [2024-07-26 03:52:27.200920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.200938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.200954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.200969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.200984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.200998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 
03:52:27.201077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:12.391 [2024-07-26 03:52:27.201189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:12.392 [2024-07-26 03:52:27.201533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.201996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:12.392 [2024-07-26 03:52:27.202810] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:12.392 [2024-07-26 03:52:27.202852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7b734ba-648f-487f-8a7d-88c9b2c09583 00:24:12.392 [2024-07-26 03:52:27.202877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:12.392 [2024-07-26 03:52:27.202893] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:12.392 [2024-07-26 03:52:27.202916] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:12.392 [2024-07-26 03:52:27.202932] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:12.393 [2024-07-26 03:52:27.202946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:12.393 [2024-07-26 03:52:27.202960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:12.393 [2024-07-26 03:52:27.202976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:12.393 [2024-07-26 03:52:27.202988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:12.393 [2024-07-26 03:52:27.203024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:12.393 [2024-07-26 03:52:27.203048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
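[editor's note] The "WAF: inf" in the statistics block above is just the write-amplification ratio hitting a zero denominator; assuming the conventional definition it works out as:

    WAF = total writes / user writes = 960 / 0  ->  reported as "inf"

which is consistent with a run where only trims, and no user writes, have reached ftl0 so far (the 960 writes are presumably metadata/housekeeping traffic).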
00:24:12.393 [2024-07-26 03:52:27.203067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:12.393 [2024-07-26 03:52:27.203084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:24:12.393 [2024-07-26 03:52:27.203100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.393 [2024-07-26 03:52:27.219773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.393 [2024-07-26 03:52:27.219849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:12.393 [2024-07-26 03:52:27.219872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.623 ms 00:24:12.393 [2024-07-26 03:52:27.219891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.393 [2024-07-26 03:52:27.220431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.393 [2024-07-26 03:52:27.220483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:12.393 [2024-07-26 03:52:27.220510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:24:12.393 [2024-07-26 03:52:27.220529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.393 [2024-07-26 03:52:27.275707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.393 [2024-07-26 03:52:27.275774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.393 [2024-07-26 03:52:27.275796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.393 [2024-07-26 03:52:27.275813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.393 [2024-07-26 03:52:27.275977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.393 [2024-07-26 03:52:27.276007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.393 [2024-07-26 03:52:27.276021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.393 [2024-07-26 03:52:27.276037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.393 [2024-07-26 03:52:27.276105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.393 [2024-07-26 03:52:27.276130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.393 [2024-07-26 03:52:27.276146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.393 [2024-07-26 03:52:27.276174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.393 [2024-07-26 03:52:27.276218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.393 [2024-07-26 03:52:27.276243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.393 [2024-07-26 03:52:27.276260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.393 [2024-07-26 03:52:27.276277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.375005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.375083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.651 [2024-07-26 03:52:27.375105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.375122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 
03:52:27.459290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.459375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:12.651 [2024-07-26 03:52:27.459397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.459414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.459526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.459552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:12.651 [2024-07-26 03:52:27.459567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.459586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.459625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.459645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:12.651 [2024-07-26 03:52:27.459660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.459679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.459804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.459860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:12.651 [2024-07-26 03:52:27.459877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.459893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.459951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.459975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:12.651 [2024-07-26 03:52:27.459990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.460004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.460057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.460079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:12.651 [2024-07-26 03:52:27.460093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.460111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.460168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.651 [2024-07-26 03:52:27.460192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:12.651 [2024-07-26 03:52:27.460214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.651 [2024-07-26 03:52:27.460245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.651 [2024-07-26 03:52:27.460429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.146 ms, result 0 00:24:13.586 03:52:28 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:13.586 03:52:28 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:13.844 [2024-07-26 03:52:28.516315] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:24:13.844 [2024-07-26 03:52:28.516500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81467 ] 00:24:13.844 [2024-07-26 03:52:28.687778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.103 [2024-07-26 03:52:28.916486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.362 [2024-07-26 03:52:29.255588] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.362 [2024-07-26 03:52:29.255667] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.622 [2024-07-26 03:52:29.416576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.622 [2024-07-26 03:52:29.416643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:14.622 [2024-07-26 03:52:29.416666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:14.622 [2024-07-26 03:52:29.416678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.622 [2024-07-26 03:52:29.419836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.622 [2024-07-26 03:52:29.419881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.622 [2024-07-26 03:52:29.419899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:24:14.622 [2024-07-26 03:52:29.419910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.622 [2024-07-26 03:52:29.420033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:14.622 [2024-07-26 03:52:29.420986] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:14.622 [2024-07-26 03:52:29.421031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.622 [2024-07-26 03:52:29.421046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.622 [2024-07-26 03:52:29.421059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:24:14.622 [2024-07-26 03:52:29.421070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.622 [2024-07-26 03:52:29.422283] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:14.622 [2024-07-26 03:52:29.438367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.622 [2024-07-26 03:52:29.438414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:14.622 [2024-07-26 03:52:29.438440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.085 ms 00:24:14.622 [2024-07-26 03:52:29.438452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.622 [2024-07-26 03:52:29.438575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.622 [2024-07-26 03:52:29.438609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:14.622 [2024-07-26 03:52:29.438623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
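[editor's note] The spdk_dd step launched above copies data back out of the FTL bdev into a plain file: --ib=ftl0 names the input bdev, --of the output file, --count=65536 the number of blocks to move, and --json the JSON config describing the bdevs. A sketch of the same invocation with, as a purely hypothetical follow-up, a byte-compare against a previously written pattern file (the real trim.sh verification may differ):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    # hypothetical check, not part of this log:
    # cmp /home/vagrant/spdk_repo/spdk/test/ftl/data <previously-written-pattern-file>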
[FTL][ftl0] duration: 0.027 ms 00:24:14.622 [2024-07-26 03:52:29.438645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.622 [2024-07-26 03:52:29.442918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.622 [2024-07-26 03:52:29.442963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.622 [2024-07-26 03:52:29.442980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:24:14.622 [2024-07-26 03:52:29.442991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.622 [2024-07-26 03:52:29.443120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.622 [2024-07-26 03:52:29.443141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.623 [2024-07-26 03:52:29.443155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:14.623 [2024-07-26 03:52:29.443165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.623 [2024-07-26 03:52:29.443207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.623 [2024-07-26 03:52:29.443223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:14.623 [2024-07-26 03:52:29.443239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:14.623 [2024-07-26 03:52:29.443250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.623 [2024-07-26 03:52:29.443281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:14.623 [2024-07-26 03:52:29.447505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.623 [2024-07-26 03:52:29.447544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.623 [2024-07-26 03:52:29.447561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:24:14.623 [2024-07-26 03:52:29.447572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.623 [2024-07-26 03:52:29.447642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.623 [2024-07-26 03:52:29.447661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:14.623 [2024-07-26 03:52:29.447674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:14.623 [2024-07-26 03:52:29.447685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.623 [2024-07-26 03:52:29.447715] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:14.623 [2024-07-26 03:52:29.447744] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:14.623 [2024-07-26 03:52:29.447791] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:14.623 [2024-07-26 03:52:29.447812] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:14.623 [2024-07-26 03:52:29.447935] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:14.623 [2024-07-26 03:52:29.447952] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:14.623 [2024-07-26 03:52:29.447966] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:14.623 [2024-07-26 03:52:29.447981] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:14.623 [2024-07-26 03:52:29.447995] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448012] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:14.623 [2024-07-26 03:52:29.448023] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:14.623 [2024-07-26 03:52:29.448034] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:14.623 [2024-07-26 03:52:29.448045] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:14.623 [2024-07-26 03:52:29.448056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.623 [2024-07-26 03:52:29.448068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:14.623 [2024-07-26 03:52:29.448079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:24:14.623 [2024-07-26 03:52:29.448090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.623 [2024-07-26 03:52:29.448187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.623 [2024-07-26 03:52:29.448202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:14.623 [2024-07-26 03:52:29.448219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:14.623 [2024-07-26 03:52:29.448230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.623 [2024-07-26 03:52:29.448338] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:14.623 [2024-07-26 03:52:29.448354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:14.623 [2024-07-26 03:52:29.448367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:14.623 [2024-07-26 03:52:29.448400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:14.623 [2024-07-26 03:52:29.448431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.623 [2024-07-26 03:52:29.448451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:14.623 [2024-07-26 03:52:29.448461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:14.623 [2024-07-26 03:52:29.448470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.623 [2024-07-26 03:52:29.448480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:14.623 [2024-07-26 03:52:29.448490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:14.623 [2024-07-26 03:52:29.448503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448514] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:14.623 [2024-07-26 03:52:29.448524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:14.623 [2024-07-26 03:52:29.448568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:14.623 [2024-07-26 03:52:29.448598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:14.623 [2024-07-26 03:52:29.448628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:14.623 [2024-07-26 03:52:29.448658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448668] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:14.623 [2024-07-26 03:52:29.448688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.623 [2024-07-26 03:52:29.448708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:14.623 [2024-07-26 03:52:29.448718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:14.623 [2024-07-26 03:52:29.448728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.623 [2024-07-26 03:52:29.448737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:14.623 [2024-07-26 03:52:29.448747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:14.623 [2024-07-26 03:52:29.448757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:14.623 [2024-07-26 03:52:29.448777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:14.623 [2024-07-26 03:52:29.448787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448797] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:14.623 [2024-07-26 03:52:29.448808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:14.623 [2024-07-26 03:52:29.448835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.623 [2024-07-26 03:52:29.448864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:14.623 
[2024-07-26 03:52:29.448875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:14.623 [2024-07-26 03:52:29.448885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:14.623 [2024-07-26 03:52:29.448896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:14.623 [2024-07-26 03:52:29.448906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:14.623 [2024-07-26 03:52:29.448916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:14.623 [2024-07-26 03:52:29.448927] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:14.623 [2024-07-26 03:52:29.448941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.623 [2024-07-26 03:52:29.448954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:14.623 [2024-07-26 03:52:29.448966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:14.623 [2024-07-26 03:52:29.448977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:14.623 [2024-07-26 03:52:29.448987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:14.623 [2024-07-26 03:52:29.448999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:14.623 [2024-07-26 03:52:29.449010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:14.623 [2024-07-26 03:52:29.449021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:14.623 [2024-07-26 03:52:29.449033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:14.623 [2024-07-26 03:52:29.449044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:14.624 [2024-07-26 03:52:29.449055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:14.624 [2024-07-26 03:52:29.449066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:14.624 [2024-07-26 03:52:29.449078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:14.624 [2024-07-26 03:52:29.449089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:14.624 [2024-07-26 03:52:29.449100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:14.624 [2024-07-26 03:52:29.449111] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:14.624 [2024-07-26 03:52:29.449123] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.624 [2024-07-26 03:52:29.449135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:14.624 [2024-07-26 03:52:29.449147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:14.624 [2024-07-26 03:52:29.449158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:14.624 [2024-07-26 03:52:29.449169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:14.624 [2024-07-26 03:52:29.449181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-07-26 03:52:29.449193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:14.624 [2024-07-26 03:52:29.449205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:24:14.624 [2024-07-26 03:52:29.449216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-07-26 03:52:29.496355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-07-26 03:52:29.496422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.624 [2024-07-26 03:52:29.496450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.040 ms 00:24:14.624 [2024-07-26 03:52:29.496462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.624 [2024-07-26 03:52:29.496673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.624 [2024-07-26 03:52:29.496695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.624 [2024-07-26 03:52:29.496715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:14.624 [2024-07-26 03:52:29.496726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.553566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.553660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.883 [2024-07-26 03:52:29.553696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.802 ms 00:24:14.883 [2024-07-26 03:52:29.553719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.553979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.554013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.883 [2024-07-26 03:52:29.554039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:14.883 [2024-07-26 03:52:29.554060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.554457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.554504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.883 [2024-07-26 03:52:29.554530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:24:14.883 [2024-07-26 03:52:29.554552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 
03:52:29.554810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.554862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.883 [2024-07-26 03:52:29.554888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:24:14.883 [2024-07-26 03:52:29.554908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.575195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.575248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.883 [2024-07-26 03:52:29.575268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.236 ms 00:24:14.883 [2024-07-26 03:52:29.575280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.591621] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:14.883 [2024-07-26 03:52:29.591674] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:14.883 [2024-07-26 03:52:29.591695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.591708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:14.883 [2024-07-26 03:52:29.591723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.237 ms 00:24:14.883 [2024-07-26 03:52:29.591734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.621654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.621713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:14.883 [2024-07-26 03:52:29.621733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.782 ms 00:24:14.883 [2024-07-26 03:52:29.621745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.637569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.637616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:14.883 [2024-07-26 03:52:29.637634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.675 ms 00:24:14.883 [2024-07-26 03:52:29.637645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.653075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.653118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:14.883 [2024-07-26 03:52:29.653136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.328 ms 00:24:14.883 [2024-07-26 03:52:29.653147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.653975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.654010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.883 [2024-07-26 03:52:29.654026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:24:14.883 [2024-07-26 03:52:29.654043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.726515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.726596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:14.883 [2024-07-26 03:52:29.726619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.413 ms 00:24:14.883 [2024-07-26 03:52:29.726631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.739277] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:14.883 [2024-07-26 03:52:29.753077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.753147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.883 [2024-07-26 03:52:29.753168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.284 ms 00:24:14.883 [2024-07-26 03:52:29.753180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.753331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.753353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:14.883 [2024-07-26 03:52:29.753366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:14.883 [2024-07-26 03:52:29.753378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.753444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.753461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.883 [2024-07-26 03:52:29.753473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:14.883 [2024-07-26 03:52:29.753484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.883 [2024-07-26 03:52:29.753517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.883 [2024-07-26 03:52:29.753537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:14.884 [2024-07-26 03:52:29.753549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:14.884 [2024-07-26 03:52:29.753560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.884 [2024-07-26 03:52:29.753597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:14.884 [2024-07-26 03:52:29.753614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.884 [2024-07-26 03:52:29.753625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:14.884 [2024-07-26 03:52:29.753637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:14.884 [2024-07-26 03:52:29.753648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.884 [2024-07-26 03:52:29.784623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.884 [2024-07-26 03:52:29.784680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.884 [2024-07-26 03:52:29.784699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.944 ms 00:24:14.884 [2024-07-26 03:52:29.784712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.884 [2024-07-26 03:52:29.784889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.884 [2024-07-26 03:52:29.784911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:24:14.884 [2024-07-26 03:52:29.784925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:14.884 [2024-07-26 03:52:29.784936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.884 [2024-07-26 03:52:29.785877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:15.142 [2024-07-26 03:52:29.789963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.953 ms, result 0 00:24:15.142 [2024-07-26 03:52:29.790708] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:15.142 [2024-07-26 03:52:29.807058] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:25.009  Copying: 28/256 [MB] (28 MBps) Copying: 52/256 [MB] (24 MBps) Copying: 78/256 [MB] (25 MBps) Copying: 102/256 [MB] (24 MBps) Copying: 128/256 [MB] (25 MBps) Copying: 152/256 [MB] (24 MBps) Copying: 178/256 [MB] (25 MBps) Copying: 202/256 [MB] (24 MBps) Copying: 228/256 [MB] (25 MBps) Copying: 254/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-26 03:52:39.858519] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.009 [2024-07-26 03:52:39.870801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.009 [2024-07-26 03:52:39.870859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.009 [2024-07-26 03:52:39.870882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.009 [2024-07-26 03:52:39.870894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.009 [2024-07-26 03:52:39.870935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:25.009 [2024-07-26 03:52:39.874178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.009 [2024-07-26 03:52:39.874213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.009 [2024-07-26 03:52:39.874229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:24:25.009 [2024-07-26 03:52:39.874240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.009 [2024-07-26 03:52:39.874523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.009 [2024-07-26 03:52:39.874546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.009 [2024-07-26 03:52:39.874560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:24:25.009 [2024-07-26 03:52:39.874571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.009 [2024-07-26 03:52:39.878369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.009 [2024-07-26 03:52:39.878404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:25.009 [2024-07-26 03:52:39.878427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.765 ms 00:24:25.009 [2024-07-26 03:52:39.878439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.009 [2024-07-26 03:52:39.885988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.009 [2024-07-26 03:52:39.886031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:25.009 
[2024-07-26 03:52:39.886046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.522 ms 00:24:25.009 [2024-07-26 03:52:39.886058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:39.917169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.269 [2024-07-26 03:52:39.917225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.269 [2024-07-26 03:52:39.917244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.037 ms 00:24:25.269 [2024-07-26 03:52:39.917256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:39.935235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.269 [2024-07-26 03:52:39.935317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.269 [2024-07-26 03:52:39.935339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.919 ms 00:24:25.269 [2024-07-26 03:52:39.935366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:39.935589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.269 [2024-07-26 03:52:39.935611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:25.269 [2024-07-26 03:52:39.935625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:25.269 [2024-07-26 03:52:39.935636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:39.967756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.269 [2024-07-26 03:52:39.967807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:25.269 [2024-07-26 03:52:39.967842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.095 ms 00:24:25.269 [2024-07-26 03:52:39.967856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:39.999563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.269 [2024-07-26 03:52:39.999625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:25.269 [2024-07-26 03:52:39.999644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.649 ms 00:24:25.269 [2024-07-26 03:52:39.999656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:40.031879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.269 [2024-07-26 03:52:40.031954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.269 [2024-07-26 03:52:40.031976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.163 ms 00:24:25.269 [2024-07-26 03:52:40.031988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:40.063662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.269 [2024-07-26 03:52:40.063738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.269 [2024-07-26 03:52:40.063759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.550 ms 00:24:25.269 [2024-07-26 03:52:40.063770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.269 [2024-07-26 03:52:40.063853] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.269 [2024-07-26 03:52:40.063888] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.063998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.269 [2024-07-26 03:52:40.064139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 
03:52:40.064186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:24:25.270 [2024-07-26 03:52:40.064481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.064990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:25.270 [2024-07-26 03:52:40.065092] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.270 [2024-07-26 03:52:40.065103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7b734ba-648f-487f-8a7d-88c9b2c09583 00:24:25.270 [2024-07-26 03:52:40.065115] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:25.270 [2024-07-26 03:52:40.065126] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:25.270 [2024-07-26 03:52:40.065152] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:25.270 [2024-07-26 03:52:40.065163] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:25.270 [2024-07-26 03:52:40.065174] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.270 [2024-07-26 03:52:40.065185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.270 [2024-07-26 03:52:40.065197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.270 [2024-07-26 03:52:40.065207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.270 [2024-07-26 03:52:40.065217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.270 [2024-07-26 03:52:40.065228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.270 [2024-07-26 03:52:40.065239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.271 [2024-07-26 03:52:40.065255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:24:25.271 [2024-07-26 03:52:40.065267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.271 [2024-07-26 03:52:40.081922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.271 [2024-07-26 03:52:40.081964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.271 [2024-07-26 03:52:40.081982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.629 ms 00:24:25.271 [2024-07-26 03:52:40.081993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.271 [2024-07-26 03:52:40.082474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.271 [2024-07-26 03:52:40.082504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.271 [2024-07-26 03:52:40.082518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:24:25.271 [2024-07-26 03:52:40.082529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.271 [2024-07-26 03:52:40.122631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.271 [2024-07-26 03:52:40.122705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.271 [2024-07-26 03:52:40.122726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.271 [2024-07-26 03:52:40.122739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.271 [2024-07-26 03:52:40.122882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.271 [2024-07-26 03:52:40.122906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.271 [2024-07-26 03:52:40.122919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.271 [2024-07-26 03:52:40.122931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:25.271 [2024-07-26 03:52:40.123005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.271 [2024-07-26 03:52:40.123024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.271 [2024-07-26 03:52:40.123036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.271 [2024-07-26 03:52:40.123047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.271 [2024-07-26 03:52:40.123072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.271 [2024-07-26 03:52:40.123085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.271 [2024-07-26 03:52:40.123102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.271 [2024-07-26 03:52:40.123113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.222519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.222597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.530 [2024-07-26 03:52:40.222617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 [2024-07-26 03:52:40.222629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.307503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.307576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.530 [2024-07-26 03:52:40.307596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 [2024-07-26 03:52:40.307608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.307693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.307712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.530 [2024-07-26 03:52:40.307724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 [2024-07-26 03:52:40.307735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.307770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.307784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.530 [2024-07-26 03:52:40.307795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 [2024-07-26 03:52:40.307812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.307973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.307993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.530 [2024-07-26 03:52:40.308005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 [2024-07-26 03:52:40.308017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.308066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.308083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.530 [2024-07-26 03:52:40.308095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 
[2024-07-26 03:52:40.308107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.308160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.308175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.530 [2024-07-26 03:52:40.308187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 [2024-07-26 03:52:40.308198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.308251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.530 [2024-07-26 03:52:40.308268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.530 [2024-07-26 03:52:40.308280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.530 [2024-07-26 03:52:40.308296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.530 [2024-07-26 03:52:40.308457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.676 ms, result 0 00:24:26.476 00:24:26.476 00:24:26.735 03:52:41 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:26.735 03:52:41 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:27.302 03:52:41 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:27.302 [2024-07-26 03:52:42.049878] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:24:27.302 [2024-07-26 03:52:42.050023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81606 ] 00:24:27.561 [2024-07-26 03:52:42.213141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.561 [2024-07-26 03:52:42.400854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.819 [2024-07-26 03:52:42.710627] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.819 [2024-07-26 03:52:42.710712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.080 [2024-07-26 03:52:42.872121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.872190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:28.080 [2024-07-26 03:52:42.872211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:28.080 [2024-07-26 03:52:42.872223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.875461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.875508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.080 [2024-07-26 03:52:42.875526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:24:28.080 [2024-07-26 03:52:42.875538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.875677] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:28.080 [2024-07-26 03:52:42.876752] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:28.080 [2024-07-26 03:52:42.876796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.876811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.080 [2024-07-26 03:52:42.876843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:24:28.080 [2024-07-26 03:52:42.876855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.877990] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:28.080 [2024-07-26 03:52:42.894489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.894535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:28.080 [2024-07-26 03:52:42.894559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.501 ms 00:24:28.080 [2024-07-26 03:52:42.894572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.894701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.894724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:28.080 [2024-07-26 03:52:42.894738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:28.080 [2024-07-26 03:52:42.894749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.899068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:28.080 [2024-07-26 03:52:42.899114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.080 [2024-07-26 03:52:42.899130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:24:28.080 [2024-07-26 03:52:42.899142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.899268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.899289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.080 [2024-07-26 03:52:42.899302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:28.080 [2024-07-26 03:52:42.899313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.899356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.899372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:28.080 [2024-07-26 03:52:42.899388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:28.080 [2024-07-26 03:52:42.899399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.899432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:28.080 [2024-07-26 03:52:42.903736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.903778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.080 [2024-07-26 03:52:42.903795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.313 ms 00:24:28.080 [2024-07-26 03:52:42.903806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.903901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.903922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:28.080 [2024-07-26 03:52:42.903935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:28.080 [2024-07-26 03:52:42.903946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.903979] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:28.080 [2024-07-26 03:52:42.904008] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:28.080 [2024-07-26 03:52:42.904057] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:28.080 [2024-07-26 03:52:42.904079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:28.080 [2024-07-26 03:52:42.904190] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:28.080 [2024-07-26 03:52:42.904209] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:28.080 [2024-07-26 03:52:42.904224] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:28.080 [2024-07-26 03:52:42.904240] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904253] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904271] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:28.080 [2024-07-26 03:52:42.904282] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:28.080 [2024-07-26 03:52:42.904294] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:28.080 [2024-07-26 03:52:42.904305] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:28.080 [2024-07-26 03:52:42.904317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.904328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:28.080 [2024-07-26 03:52:42.904340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:24:28.080 [2024-07-26 03:52:42.904351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.904449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.080 [2024-07-26 03:52:42.904465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:28.080 [2024-07-26 03:52:42.904481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:28.080 [2024-07-26 03:52:42.904492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.080 [2024-07-26 03:52:42.904600] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:28.080 [2024-07-26 03:52:42.904616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:28.080 [2024-07-26 03:52:42.904638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:28.080 [2024-07-26 03:52:42.904673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:28.080 [2024-07-26 03:52:42.904705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.080 [2024-07-26 03:52:42.904726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:28.080 [2024-07-26 03:52:42.904736] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:28.080 [2024-07-26 03:52:42.904746] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.080 [2024-07-26 03:52:42.904757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:28.080 [2024-07-26 03:52:42.904768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:28.080 [2024-07-26 03:52:42.904778] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:28.080 [2024-07-26 03:52:42.904799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904846] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:28.080 [2024-07-26 03:52:42.904870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:28.080 [2024-07-26 03:52:42.904903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:28.080 [2024-07-26 03:52:42.904933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:28.080 [2024-07-26 03:52:42.904943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.080 [2024-07-26 03:52:42.904957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:28.080 [2024-07-26 03:52:42.904967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:28.081 [2024-07-26 03:52:42.904977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.081 [2024-07-26 03:52:42.904987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:28.081 [2024-07-26 03:52:42.904997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:28.081 [2024-07-26 03:52:42.905007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.081 [2024-07-26 03:52:42.905017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:28.081 [2024-07-26 03:52:42.905027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:28.081 [2024-07-26 03:52:42.905037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.081 [2024-07-26 03:52:42.905047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:28.081 [2024-07-26 03:52:42.905058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:28.081 [2024-07-26 03:52:42.905068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.081 [2024-07-26 03:52:42.905078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:28.081 [2024-07-26 03:52:42.905088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:28.081 [2024-07-26 03:52:42.905098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.081 [2024-07-26 03:52:42.905107] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:28.081 [2024-07-26 03:52:42.905119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:28.081 [2024-07-26 03:52:42.905130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.081 [2024-07-26 03:52:42.905141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.081 [2024-07-26 03:52:42.905158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:28.081 [2024-07-26 03:52:42.905169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:28.081 [2024-07-26 03:52:42.905180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:28.081 
[2024-07-26 03:52:42.905191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:28.081 [2024-07-26 03:52:42.905201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:28.081 [2024-07-26 03:52:42.905212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:28.081 [2024-07-26 03:52:42.905223] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:28.081 [2024-07-26 03:52:42.905237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.081 [2024-07-26 03:52:42.905259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:28.081 [2024-07-26 03:52:42.905271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:28.081 [2024-07-26 03:52:42.905282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:28.081 [2024-07-26 03:52:42.905293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:28.081 [2024-07-26 03:52:42.905305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:28.081 [2024-07-26 03:52:42.905316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:28.081 [2024-07-26 03:52:42.905327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:28.081 [2024-07-26 03:52:42.905338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:28.081 [2024-07-26 03:52:42.905349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:28.081 [2024-07-26 03:52:42.905360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:28.081 [2024-07-26 03:52:42.905371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:28.081 [2024-07-26 03:52:42.905382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:28.081 [2024-07-26 03:52:42.905394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:28.081 [2024-07-26 03:52:42.905405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:28.081 [2024-07-26 03:52:42.905417] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:28.081 [2024-07-26 03:52:42.905430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.081 [2024-07-26 03:52:42.905449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:28.081 [2024-07-26 03:52:42.905460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:28.081 [2024-07-26 03:52:42.905471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:28.081 [2024-07-26 03:52:42.905483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:28.081 [2024-07-26 03:52:42.905496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.081 [2024-07-26 03:52:42.905508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:28.081 [2024-07-26 03:52:42.905521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:24:28.081 [2024-07-26 03:52:42.905531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.081 [2024-07-26 03:52:42.944946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.081 [2024-07-26 03:52:42.945010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.081 [2024-07-26 03:52:42.945038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.315 ms 00:24:28.081 [2024-07-26 03:52:42.945050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.081 [2024-07-26 03:52:42.945252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.081 [2024-07-26 03:52:42.945274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:28.081 [2024-07-26 03:52:42.945294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:28.081 [2024-07-26 03:52:42.945306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:42.984203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:42.984264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.341 [2024-07-26 03:52:42.984285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.863 ms 00:24:28.341 [2024-07-26 03:52:42.984297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:42.984450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:42.984471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.341 [2024-07-26 03:52:42.984485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:28.341 [2024-07-26 03:52:42.984496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:42.984847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:42.984867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.341 [2024-07-26 03:52:42.984881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:24:28.341 [2024-07-26 03:52:42.984892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:42.985056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:42.985075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.341 [2024-07-26 03:52:42.985088] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:28.341 [2024-07-26 03:52:42.985099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.001368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.001414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.341 [2024-07-26 03:52:43.001433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.239 ms 00:24:28.341 [2024-07-26 03:52:43.001445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.017838] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:28.341 [2024-07-26 03:52:43.017900] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:28.341 [2024-07-26 03:52:43.017921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.017934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:28.341 [2024-07-26 03:52:43.017947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.306 ms 00:24:28.341 [2024-07-26 03:52:43.017958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.048248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.048295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:28.341 [2024-07-26 03:52:43.048313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.182 ms 00:24:28.341 [2024-07-26 03:52:43.048325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.064129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.064171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:28.341 [2024-07-26 03:52:43.064189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.701 ms 00:24:28.341 [2024-07-26 03:52:43.064200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.080227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.080270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:28.341 [2024-07-26 03:52:43.080287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.932 ms 00:24:28.341 [2024-07-26 03:52:43.080299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.081112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.081150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:28.341 [2024-07-26 03:52:43.081166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:24:28.341 [2024-07-26 03:52:43.081177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.153839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.153913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:28.341 [2024-07-26 03:52:43.153934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.624 ms 00:24:28.341 [2024-07-26 03:52:43.153946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.166913] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:28.341 [2024-07-26 03:52:43.180948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.181020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:28.341 [2024-07-26 03:52:43.181040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.842 ms 00:24:28.341 [2024-07-26 03:52:43.181052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.181214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.181237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:28.341 [2024-07-26 03:52:43.181251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:28.341 [2024-07-26 03:52:43.181262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.181329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.181346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:28.341 [2024-07-26 03:52:43.181359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:28.341 [2024-07-26 03:52:43.181370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.341 [2024-07-26 03:52:43.181403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.341 [2024-07-26 03:52:43.181423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:28.342 [2024-07-26 03:52:43.181435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:28.342 [2024-07-26 03:52:43.181446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.342 [2024-07-26 03:52:43.181483] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:28.342 [2024-07-26 03:52:43.181500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.342 [2024-07-26 03:52:43.181511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:28.342 [2024-07-26 03:52:43.181523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:28.342 [2024-07-26 03:52:43.181534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.342 [2024-07-26 03:52:43.212749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.342 [2024-07-26 03:52:43.212802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:28.342 [2024-07-26 03:52:43.212844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.183 ms 00:24:28.342 [2024-07-26 03:52:43.212860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.342 [2024-07-26 03:52:43.213005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.342 [2024-07-26 03:52:43.213028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:28.342 [2024-07-26 03:52:43.213041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:28.342 [2024-07-26 03:52:43.213052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:28.342 [2024-07-26 03:52:43.214115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:28.342 [2024-07-26 03:52:43.218385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.644 ms, result 0 00:24:28.342 [2024-07-26 03:52:43.219137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:28.342 [2024-07-26 03:52:43.235553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:28.605  Copying: 4096/4096 [kB] (average 27 MBps)[2024-07-26 03:52:43.384035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:28.605 [2024-07-26 03:52:43.396463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.396509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:28.605 [2024-07-26 03:52:43.396529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:28.605 [2024-07-26 03:52:43.396561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.396612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:28.605 [2024-07-26 03:52:43.400059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.400096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:28.605 [2024-07-26 03:52:43.400112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:24:28.605 [2024-07-26 03:52:43.400123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.401556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.401608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:28.605 [2024-07-26 03:52:43.401626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms 00:24:28.605 [2024-07-26 03:52:43.401638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.405751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.405793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:28.605 [2024-07-26 03:52:43.405828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:24:28.605 [2024-07-26 03:52:43.405842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.413490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.413527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:28.605 [2024-07-26 03:52:43.413543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.604 ms 00:24:28.605 [2024-07-26 03:52:43.413554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.444711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.444771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:28.605 [2024-07-26 03:52:43.444791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.081 ms 00:24:28.605 [2024-07-26 03:52:43.444802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.462859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.462904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:28.605 [2024-07-26 03:52:43.462922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.960 ms 00:24:28.605 [2024-07-26 03:52:43.462941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.463108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.463130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:28.605 [2024-07-26 03:52:43.463143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:28.605 [2024-07-26 03:52:43.463155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.605 [2024-07-26 03:52:43.494876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.605 [2024-07-26 03:52:43.494923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:28.605 [2024-07-26 03:52:43.494941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.697 ms 00:24:28.605 [2024-07-26 03:52:43.494969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.872 [2024-07-26 03:52:43.526126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.872 [2024-07-26 03:52:43.526184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:28.872 [2024-07-26 03:52:43.526203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.081 ms 00:24:28.872 [2024-07-26 03:52:43.526215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.872 [2024-07-26 03:52:43.557507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.872 [2024-07-26 03:52:43.557561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:28.872 [2024-07-26 03:52:43.557580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.218 ms 00:24:28.872 [2024-07-26 03:52:43.557592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.872 [2024-07-26 03:52:43.588610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.872 [2024-07-26 03:52:43.588661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:28.872 [2024-07-26 03:52:43.588679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.916 ms 00:24:28.872 [2024-07-26 03:52:43.588690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.872 [2024-07-26 03:52:43.588759] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:28.872 [2024-07-26 03:52:43.588785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:28.872 [2024-07-26 03:52:43.588800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:28.872 [2024-07-26 03:52:43.588812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:28.872 [2024-07-26 03:52:43.588870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:28.872 [2024-07-26 
03:52:43.588888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.588900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.588912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.588924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.588935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.588948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.588979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.588994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:24:28.873 [2024-07-26 03:52:43.589237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.589987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:28.873 [2024-07-26 03:52:43.590245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:28.873 [2024-07-26 03:52:43.590257] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7b734ba-648f-487f-8a7d-88c9b2c09583 00:24:28.873 [2024-07-26 03:52:43.590268] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:28.873 [2024-07-26 03:52:43.590279] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:28.873 
[2024-07-26 03:52:43.590306] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:28.873 [2024-07-26 03:52:43.590318] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:28.873 [2024-07-26 03:52:43.590329] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:28.873 [2024-07-26 03:52:43.590340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:28.873 [2024-07-26 03:52:43.590352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:28.873 [2024-07-26 03:52:43.590362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:28.873 [2024-07-26 03:52:43.590371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:28.873 [2024-07-26 03:52:43.590383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.873 [2024-07-26 03:52:43.590395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:28.873 [2024-07-26 03:52:43.590413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:24:28.873 [2024-07-26 03:52:43.590424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.873 [2024-07-26 03:52:43.607088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.873 [2024-07-26 03:52:43.607132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:28.873 [2024-07-26 03:52:43.607149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.635 ms 00:24:28.873 [2024-07-26 03:52:43.607161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.873 [2024-07-26 03:52:43.607627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.873 [2024-07-26 03:52:43.607651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:28.873 [2024-07-26 03:52:43.607665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:24:28.873 [2024-07-26 03:52:43.607676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.873 [2024-07-26 03:52:43.648202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.873 [2024-07-26 03:52:43.648269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.873 [2024-07-26 03:52:43.648288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.873 [2024-07-26 03:52:43.648300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.874 [2024-07-26 03:52:43.648444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.874 [2024-07-26 03:52:43.648463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.874 [2024-07-26 03:52:43.648476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.874 [2024-07-26 03:52:43.648487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.874 [2024-07-26 03:52:43.648553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.874 [2024-07-26 03:52:43.648571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.874 [2024-07-26 03:52:43.648584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.874 [2024-07-26 03:52:43.648596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.874 [2024-07-26 03:52:43.648621] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:24:28.874 [2024-07-26 03:52:43.648640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.874 [2024-07-26 03:52:43.648652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.874 [2024-07-26 03:52:43.648663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.874 [2024-07-26 03:52:43.747918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.874 [2024-07-26 03:52:43.748005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.874 [2024-07-26 03:52:43.748026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.874 [2024-07-26 03:52:43.748038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.133 [2024-07-26 03:52:43.833125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.133 [2024-07-26 03:52:43.833205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.133 [2024-07-26 03:52:43.833226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.133 [2024-07-26 03:52:43.833238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.133 [2024-07-26 03:52:43.833354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.133 [2024-07-26 03:52:43.833383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.133 [2024-07-26 03:52:43.833395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.133 [2024-07-26 03:52:43.833407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.133 [2024-07-26 03:52:43.833443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.133 [2024-07-26 03:52:43.833457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.133 [2024-07-26 03:52:43.833469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.133 [2024-07-26 03:52:43.833485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.133 [2024-07-26 03:52:43.833608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.133 [2024-07-26 03:52:43.833626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.133 [2024-07-26 03:52:43.833639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.133 [2024-07-26 03:52:43.833650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.133 [2024-07-26 03:52:43.833697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.133 [2024-07-26 03:52:43.833715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:29.133 [2024-07-26 03:52:43.833727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.133 [2024-07-26 03:52:43.833744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.133 [2024-07-26 03:52:43.833791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.133 [2024-07-26 03:52:43.833807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.133 [2024-07-26 03:52:43.833849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.133 [2024-07-26 03:52:43.833863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:29.133 [2024-07-26 03:52:43.833927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.133 [2024-07-26 03:52:43.833944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.133 [2024-07-26 03:52:43.833957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.133 [2024-07-26 03:52:43.833973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.133 [2024-07-26 03:52:43.834136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.683 ms, result 0 00:24:30.069 00:24:30.069 00:24:30.069 03:52:44 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81637 00:24:30.069 03:52:44 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:30.070 03:52:44 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81637 00:24:30.070 03:52:44 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81637 ']' 00:24:30.070 03:52:44 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.070 03:52:44 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.070 03:52:44 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.070 03:52:44 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.070 03:52:44 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:30.328 [2024-07-26 03:52:45.023144] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:24:30.328 [2024-07-26 03:52:45.023316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81637 ] 00:24:30.328 [2024-07-26 03:52:45.196033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.586 [2024-07-26 03:52:45.450853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.521 03:52:46 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.521 03:52:46 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:24:31.521 03:52:46 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:31.779 [2024-07-26 03:52:46.500540] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:31.779 [2024-07-26 03:52:46.500636] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:31.779 [2024-07-26 03:52:46.678832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.779 [2024-07-26 03:52:46.678904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:31.779 [2024-07-26 03:52:46.678932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:31.779 [2024-07-26 03:52:46.678954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.779 [2024-07-26 03:52:46.682196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.779 [2024-07-26 03:52:46.682246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.779 [2024-07-26 03:52:46.682266] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:24:31.779 [2024-07-26 03:52:46.682282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.779 [2024-07-26 03:52:46.682453] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:32.039 [2024-07-26 03:52:46.683492] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:32.039 [2024-07-26 03:52:46.683539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.683561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.039 [2024-07-26 03:52:46.683585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:24:32.039 [2024-07-26 03:52:46.683606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.684905] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:32.039 [2024-07-26 03:52:46.701645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.701703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:32.039 [2024-07-26 03:52:46.701729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.740 ms 00:24:32.039 [2024-07-26 03:52:46.701744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.701914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.701940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:32.039 [2024-07-26 03:52:46.701967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:32.039 [2024-07-26 03:52:46.701989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.706589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.706655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.039 [2024-07-26 03:52:46.706683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.518 ms 00:24:32.039 [2024-07-26 03:52:46.706698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.706875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.706900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.039 [2024-07-26 03:52:46.706918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:32.039 [2024-07-26 03:52:46.706936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.706990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.707007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:32.039 [2024-07-26 03:52:46.707023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:32.039 [2024-07-26 03:52:46.707036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.707077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:32.039 [2024-07-26 03:52:46.711355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:32.039 [2024-07-26 03:52:46.711401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.039 [2024-07-26 03:52:46.711427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.291 ms 00:24:32.039 [2024-07-26 03:52:46.711445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.711520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.711552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:32.039 [2024-07-26 03:52:46.711569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:32.039 [2024-07-26 03:52:46.711585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.711623] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:32.039 [2024-07-26 03:52:46.711667] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:32.039 [2024-07-26 03:52:46.711727] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:32.039 [2024-07-26 03:52:46.711760] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:32.039 [2024-07-26 03:52:46.711901] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:32.039 [2024-07-26 03:52:46.711946] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:32.039 [2024-07-26 03:52:46.711963] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:32.039 [2024-07-26 03:52:46.711983] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712005] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712026] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:32.039 [2024-07-26 03:52:46.712039] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:32.039 [2024-07-26 03:52:46.712053] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:32.039 [2024-07-26 03:52:46.712066] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:32.039 [2024-07-26 03:52:46.712084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.712102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:32.039 [2024-07-26 03:52:46.712127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:24:32.039 [2024-07-26 03:52:46.712144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.712283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.039 [2024-07-26 03:52:46.712307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:32.039 [2024-07-26 03:52:46.712324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:32.039 [2024-07-26 03:52:46.712338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.039 [2024-07-26 03:52:46.712460] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:32.039 [2024-07-26 03:52:46.712481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:32.039 [2024-07-26 03:52:46.712497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:32.039 [2024-07-26 03:52:46.712544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:32.039 [2024-07-26 03:52:46.712588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.039 [2024-07-26 03:52:46.712615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:32.039 [2024-07-26 03:52:46.712628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:32.039 [2024-07-26 03:52:46.712642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.039 [2024-07-26 03:52:46.712654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:32.039 [2024-07-26 03:52:46.712669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:32.039 [2024-07-26 03:52:46.712680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:32.039 [2024-07-26 03:52:46.712714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:32.039 [2024-07-26 03:52:46.712756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:32.039 [2024-07-26 03:52:46.712794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:32.039 [2024-07-26 03:52:46.712856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:32.039 [2024-07-26 03:52:46.712909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.039 [2024-07-26 03:52:46.712936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:32.039 [2024-07-26 
03:52:46.712950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:32.039 [2024-07-26 03:52:46.712963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.039 [2024-07-26 03:52:46.712977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:32.039 [2024-07-26 03:52:46.712989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:32.039 [2024-07-26 03:52:46.713003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.039 [2024-07-26 03:52:46.713015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:32.039 [2024-07-26 03:52:46.713029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:32.039 [2024-07-26 03:52:46.713041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.039 [2024-07-26 03:52:46.713057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:32.039 [2024-07-26 03:52:46.713070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:32.039 [2024-07-26 03:52:46.713083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.040 [2024-07-26 03:52:46.713095] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:32.040 [2024-07-26 03:52:46.713110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:32.040 [2024-07-26 03:52:46.713123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.040 [2024-07-26 03:52:46.713137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.040 [2024-07-26 03:52:46.713151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:32.040 [2024-07-26 03:52:46.713165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:32.040 [2024-07-26 03:52:46.713177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:32.040 [2024-07-26 03:52:46.713192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:32.040 [2024-07-26 03:52:46.713204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:32.040 [2024-07-26 03:52:46.713218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:32.040 [2024-07-26 03:52:46.713232] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:32.040 [2024-07-26 03:52:46.713250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.040 [2024-07-26 03:52:46.713264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:32.040 [2024-07-26 03:52:46.713283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:32.040 [2024-07-26 03:52:46.713297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:32.040 [2024-07-26 03:52:46.713312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:32.040 [2024-07-26 03:52:46.713325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:32.040 
[2024-07-26 03:52:46.713340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:32.040 [2024-07-26 03:52:46.713353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:32.040 [2024-07-26 03:52:46.713367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:32.040 [2024-07-26 03:52:46.713380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:32.040 [2024-07-26 03:52:46.713395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:32.040 [2024-07-26 03:52:46.713408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:32.040 [2024-07-26 03:52:46.713423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:32.040 [2024-07-26 03:52:46.713435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:32.040 [2024-07-26 03:52:46.713451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:32.040 [2024-07-26 03:52:46.713463] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:32.040 [2024-07-26 03:52:46.713479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.040 [2024-07-26 03:52:46.713493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:32.040 [2024-07-26 03:52:46.713511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:32.040 [2024-07-26 03:52:46.713524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:32.040 [2024-07-26 03:52:46.713539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:32.040 [2024-07-26 03:52:46.713554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.713569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:32.040 [2024-07-26 03:52:46.713583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:24:32.040 [2024-07-26 03:52:46.713602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.747958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.748027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:32.040 [2024-07-26 03:52:46.748054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.276 ms 00:24:32.040 [2024-07-26 03:52:46.748071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.748259] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.748284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:32.040 [2024-07-26 03:52:46.748300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:32.040 [2024-07-26 03:52:46.748315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.787716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.787785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:32.040 [2024-07-26 03:52:46.787844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.369 ms 00:24:32.040 [2024-07-26 03:52:46.787872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.788025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.788052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:32.040 [2024-07-26 03:52:46.788067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:32.040 [2024-07-26 03:52:46.788088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.788448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.788479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:32.040 [2024-07-26 03:52:46.788494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:24:32.040 [2024-07-26 03:52:46.788509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.788668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.788690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:32.040 [2024-07-26 03:52:46.788704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:32.040 [2024-07-26 03:52:46.788719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.807097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.807158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:32.040 [2024-07-26 03:52:46.807179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.346 ms 00:24:32.040 [2024-07-26 03:52:46.807202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.823940] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:32.040 [2024-07-26 03:52:46.823989] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:32.040 [2024-07-26 03:52:46.824013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.824030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:32.040 [2024-07-26 03:52:46.824045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.647 ms 00:24:32.040 [2024-07-26 03:52:46.824060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.854412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 
03:52:46.854466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:32.040 [2024-07-26 03:52:46.854486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.255 ms 00:24:32.040 [2024-07-26 03:52:46.854506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.870694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.870745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:32.040 [2024-07-26 03:52:46.870783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.072 ms 00:24:32.040 [2024-07-26 03:52:46.870806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.886803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.886859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:32.040 [2024-07-26 03:52:46.886878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.781 ms 00:24:32.040 [2024-07-26 03:52:46.886894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.040 [2024-07-26 03:52:46.887710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.040 [2024-07-26 03:52:46.887746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:32.040 [2024-07-26 03:52:46.887763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:24:32.040 [2024-07-26 03:52:46.887778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.299 [2024-07-26 03:52:46.969900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.299 [2024-07-26 03:52:46.969999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:32.299 [2024-07-26 03:52:46.970025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.087 ms 00:24:32.299 [2024-07-26 03:52:46.970042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.299 [2024-07-26 03:52:46.983061] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:32.299 [2024-07-26 03:52:46.997313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.299 [2024-07-26 03:52:46.997383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:32.299 [2024-07-26 03:52:46.997412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.097 ms 00:24:32.299 [2024-07-26 03:52:46.997427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.299 [2024-07-26 03:52:46.997566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.299 [2024-07-26 03:52:46.997587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:32.299 [2024-07-26 03:52:46.997604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:32.299 [2024-07-26 03:52:46.997617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.299 [2024-07-26 03:52:46.997688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.299 [2024-07-26 03:52:46.997706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:32.299 [2024-07-26 03:52:46.997726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:32.299 
[2024-07-26 03:52:46.997739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.299 [2024-07-26 03:52:46.997777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.299 [2024-07-26 03:52:46.997793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:32.299 [2024-07-26 03:52:46.997809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:32.299 [2024-07-26 03:52:46.997847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.299 [2024-07-26 03:52:46.997898] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:32.299 [2024-07-26 03:52:46.997916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.299 [2024-07-26 03:52:46.997933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:32.299 [2024-07-26 03:52:46.997948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:32.299 [2024-07-26 03:52:46.997966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.299 [2024-07-26 03:52:47.029286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.299 [2024-07-26 03:52:47.029334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:32.299 [2024-07-26 03:52:47.029355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.287 ms 00:24:32.300 [2024-07-26 03:52:47.029372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.300 [2024-07-26 03:52:47.029504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.300 [2024-07-26 03:52:47.029535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:32.300 [2024-07-26 03:52:47.029563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:32.300 [2024-07-26 03:52:47.029587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.300 [2024-07-26 03:52:47.030641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:32.300 [2024-07-26 03:52:47.034906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.448 ms, result 0 00:24:32.300 [2024-07-26 03:52:47.036014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:32.300 Some configs were skipped because the RPC state that can call them passed over. 
00:24:32.300 03:52:47 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:32.558 [2024-07-26 03:52:47.346397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.558 [2024-07-26 03:52:47.346471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:32.558 [2024-07-26 03:52:47.346501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:24:32.558 [2024-07-26 03:52:47.346517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.558 [2024-07-26 03:52:47.346573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.604 ms, result 0 00:24:32.558 true 00:24:32.558 03:52:47 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:32.816 [2024-07-26 03:52:47.590161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.816 [2024-07-26 03:52:47.590233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:32.816 [2024-07-26 03:52:47.590256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:24:32.816 [2024-07-26 03:52:47.590273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.817 [2024-07-26 03:52:47.590326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.054 ms, result 0 00:24:32.817 true 00:24:32.817 03:52:47 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81637 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81637 ']' 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81637 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81637 00:24:32.817 killing process with pid 81637 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81637' 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81637 00:24:32.817 03:52:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81637 00:24:33.752 [2024-07-26 03:52:48.587300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.752 [2024-07-26 03:52:48.587376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:33.752 [2024-07-26 03:52:48.587402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:33.752 [2024-07-26 03:52:48.587420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.587458] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:33.753 [2024-07-26 03:52:48.590966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.591010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:33.753 [2024-07-26 03:52:48.591028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.482 ms 00:24:33.753 [2024-07-26 03:52:48.591047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.591355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.591388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:33.753 [2024-07-26 03:52:48.591405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:33.753 [2024-07-26 03:52:48.591420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.595651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.595707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:33.753 [2024-07-26 03:52:48.595726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.204 ms 00:24:33.753 [2024-07-26 03:52:48.595741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.603478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.603526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:33.753 [2024-07-26 03:52:48.603564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.689 ms 00:24:33.753 [2024-07-26 03:52:48.603582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.616097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.616147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:33.753 [2024-07-26 03:52:48.616166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.446 ms 00:24:33.753 [2024-07-26 03:52:48.616184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.624770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.624846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:33.753 [2024-07-26 03:52:48.624868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.538 ms 00:24:33.753 [2024-07-26 03:52:48.624884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.625061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.625103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:33.753 [2024-07-26 03:52:48.625125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:33.753 [2024-07-26 03:52:48.625155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.638147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.638198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:33.753 [2024-07-26 03:52:48.638217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.963 ms 00:24:33.753 [2024-07-26 03:52:48.638232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.753 [2024-07-26 03:52:48.651088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.753 [2024-07-26 03:52:48.651142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:33.753 [2024-07-26 
03:52:48.651161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.805 ms 00:24:33.753 [2024-07-26 03:52:48.651186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.012 [2024-07-26 03:52:48.663536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.012 [2024-07-26 03:52:48.663587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:34.012 [2024-07-26 03:52:48.663605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.292 ms 00:24:34.012 [2024-07-26 03:52:48.663621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.012 [2024-07-26 03:52:48.675900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.012 [2024-07-26 03:52:48.675949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:34.012 [2024-07-26 03:52:48.675968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.200 ms 00:24:34.012 [2024-07-26 03:52:48.675984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.012 [2024-07-26 03:52:48.676033] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:34.012 [2024-07-26 03:52:48.676063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676313] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 03:52:48.676666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:34.012 [2024-07-26 
03:52:48.676680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.676996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:24:34.013 [2024-07-26 03:52:48.677071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:34.013 [2024-07-26 03:52:48.677636] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:34.013 [2024-07-26 03:52:48.677650] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7b734ba-648f-487f-8a7d-88c9b2c09583 00:24:34.013 [2024-07-26 03:52:48.677679] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:34.013 [2024-07-26 03:52:48.677694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:34.013 [2024-07-26 03:52:48.677710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:34.013 [2024-07-26 03:52:48.677723] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:34.013 [2024-07-26 03:52:48.677738] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:34.013 [2024-07-26 03:52:48.677751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:34.013 [2024-07-26 03:52:48.677767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:34.013 [2024-07-26 03:52:48.677786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:34.013 [2024-07-26 03:52:48.677839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:34.013 [2024-07-26 03:52:48.677858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.013 [2024-07-26 03:52:48.677883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:34.013 [2024-07-26 03:52:48.677898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:24:34.013 [2024-07-26 03:52:48.677917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.013 [2024-07-26 03:52:48.695116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.013 [2024-07-26 03:52:48.695184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:34.013 [2024-07-26 03:52:48.695205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.135 ms 00:24:34.013 [2024-07-26 03:52:48.695225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.013 [2024-07-26 03:52:48.695748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:34.013 [2024-07-26 03:52:48.695788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:34.013 [2024-07-26 03:52:48.695808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:24:34.013 [2024-07-26 03:52:48.695852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.013 [2024-07-26 03:52:48.752244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.013 [2024-07-26 03:52:48.752328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:34.013 [2024-07-26 03:52:48.752352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.013 [2024-07-26 03:52:48.752368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.013 [2024-07-26 03:52:48.752525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.013 [2024-07-26 03:52:48.752552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:34.013 [2024-07-26 03:52:48.752574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.013 [2024-07-26 03:52:48.752596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.013 [2024-07-26 03:52:48.752667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.013 [2024-07-26 03:52:48.752702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:34.013 [2024-07-26 03:52:48.752719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.013 [2024-07-26 03:52:48.752737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.013 [2024-07-26 03:52:48.752771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.013 [2024-07-26 03:52:48.752794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:34.013 [2024-07-26 03:52:48.752809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.013 [2024-07-26 03:52:48.752846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.014 [2024-07-26 03:52:48.854052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.014 [2024-07-26 03:52:48.854129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:34.014 [2024-07-26 03:52:48.854151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.014 [2024-07-26 03:52:48.854167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.273 [2024-07-26 03:52:48.940290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.273 [2024-07-26 03:52:48.940365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:34.273 [2024-07-26 03:52:48.940390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.273 [2024-07-26 03:52:48.940406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.273 [2024-07-26 03:52:48.940515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.273 [2024-07-26 03:52:48.940540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:34.273 [2024-07-26 03:52:48.940554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.273 [2024-07-26 03:52:48.940571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:34.273 [2024-07-26 03:52:48.940609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.273 [2024-07-26 03:52:48.940628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:34.273 [2024-07-26 03:52:48.940642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.273 [2024-07-26 03:52:48.940656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.273 [2024-07-26 03:52:48.940787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.273 [2024-07-26 03:52:48.940832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:34.273 [2024-07-26 03:52:48.940851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.273 [2024-07-26 03:52:48.940868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.273 [2024-07-26 03:52:48.940925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.273 [2024-07-26 03:52:48.940948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:34.273 [2024-07-26 03:52:48.940961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.273 [2024-07-26 03:52:48.940977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.273 [2024-07-26 03:52:48.941029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.273 [2024-07-26 03:52:48.941055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:34.273 [2024-07-26 03:52:48.941070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.273 [2024-07-26 03:52:48.941088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.273 [2024-07-26 03:52:48.941145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:34.273 [2024-07-26 03:52:48.941166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:34.273 [2024-07-26 03:52:48.941180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:34.273 [2024-07-26 03:52:48.941195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.273 [2024-07-26 03:52:48.941356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.042 ms, result 0 00:24:35.208 03:52:49 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:35.208 [2024-07-26 03:52:49.992436] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:24:35.208 [2024-07-26 03:52:49.992612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81702 ] 00:24:35.466 [2024-07-26 03:52:50.161488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.724 [2024-07-26 03:52:50.392412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.982 [2024-07-26 03:52:50.725071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:35.982 [2024-07-26 03:52:50.725147] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:35.982 [2024-07-26 03:52:50.885636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.982 [2024-07-26 03:52:50.885703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:35.982 [2024-07-26 03:52:50.885724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:35.982 [2024-07-26 03:52:50.885737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.888895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.888940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.242 [2024-07-26 03:52:50.888957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:24:36.242 [2024-07-26 03:52:50.888969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.889168] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:36.242 [2024-07-26 03:52:50.890116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:36.242 [2024-07-26 03:52:50.890157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.890172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.242 [2024-07-26 03:52:50.890185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:24:36.242 [2024-07-26 03:52:50.890196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.891531] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:36.242 [2024-07-26 03:52:50.907750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.907795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:36.242 [2024-07-26 03:52:50.907836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.221 ms 00:24:36.242 [2024-07-26 03:52:50.907852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.908111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.908143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:36.242 [2024-07-26 03:52:50.908158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:36.242 [2024-07-26 03:52:50.908169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.912446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:36.242 [2024-07-26 03:52:50.912492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.242 [2024-07-26 03:52:50.912508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:24:36.242 [2024-07-26 03:52:50.912520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.912654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.912675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.242 [2024-07-26 03:52:50.912688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:36.242 [2024-07-26 03:52:50.912699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.912742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.912764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:36.242 [2024-07-26 03:52:50.912794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:36.242 [2024-07-26 03:52:50.912806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.912858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:36.242 [2024-07-26 03:52:50.917049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.917087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.242 [2024-07-26 03:52:50.917102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:24:36.242 [2024-07-26 03:52:50.917114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.917182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.917202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:36.242 [2024-07-26 03:52:50.917215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:36.242 [2024-07-26 03:52:50.917226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.917257] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:36.242 [2024-07-26 03:52:50.917286] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:36.242 [2024-07-26 03:52:50.917333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:36.242 [2024-07-26 03:52:50.917365] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:36.242 [2024-07-26 03:52:50.917472] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:36.242 [2024-07-26 03:52:50.917488] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:36.242 [2024-07-26 03:52:50.917502] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:36.242 [2024-07-26 03:52:50.917517] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:36.242 [2024-07-26 03:52:50.917531] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:36.242 [2024-07-26 03:52:50.917548] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:36.242 [2024-07-26 03:52:50.917560] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:36.242 [2024-07-26 03:52:50.917571] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:36.242 [2024-07-26 03:52:50.917581] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:36.242 [2024-07-26 03:52:50.917594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.917605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:36.242 [2024-07-26 03:52:50.917618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:24:36.242 [2024-07-26 03:52:50.917629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.917726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.242 [2024-07-26 03:52:50.917747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:36.242 [2024-07-26 03:52:50.917765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:36.242 [2024-07-26 03:52:50.917776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.242 [2024-07-26 03:52:50.917912] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:36.242 [2024-07-26 03:52:50.917939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:36.242 [2024-07-26 03:52:50.917954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.242 [2024-07-26 03:52:50.917965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.242 [2024-07-26 03:52:50.917977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:36.242 [2024-07-26 03:52:50.917989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:36.242 [2024-07-26 03:52:50.917999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:36.242 [2024-07-26 03:52:50.918010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:36.242 [2024-07-26 03:52:50.918020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:36.242 [2024-07-26 03:52:50.918030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.242 [2024-07-26 03:52:50.918041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:36.242 [2024-07-26 03:52:50.918051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:36.242 [2024-07-26 03:52:50.918061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.242 [2024-07-26 03:52:50.918072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:36.242 [2024-07-26 03:52:50.918082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:36.242 [2024-07-26 03:52:50.918093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.242 [2024-07-26 03:52:50.918103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:36.242 [2024-07-26 03:52:50.918114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:36.242 [2024-07-26 03:52:50.918138] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.242 [2024-07-26 03:52:50.918149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:36.242 [2024-07-26 03:52:50.918159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.243 [2024-07-26 03:52:50.918180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:36.243 [2024-07-26 03:52:50.918190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.243 [2024-07-26 03:52:50.918210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:36.243 [2024-07-26 03:52:50.918221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.243 [2024-07-26 03:52:50.918241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:36.243 [2024-07-26 03:52:50.918251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.243 [2024-07-26 03:52:50.918272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:36.243 [2024-07-26 03:52:50.918282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.243 [2024-07-26 03:52:50.918302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:36.243 [2024-07-26 03:52:50.918313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:36.243 [2024-07-26 03:52:50.918322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.243 [2024-07-26 03:52:50.918333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:36.243 [2024-07-26 03:52:50.918343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:36.243 [2024-07-26 03:52:50.918354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:36.243 [2024-07-26 03:52:50.918374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:36.243 [2024-07-26 03:52:50.918384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918394] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:36.243 [2024-07-26 03:52:50.918406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:36.243 [2024-07-26 03:52:50.918417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.243 [2024-07-26 03:52:50.918428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.243 [2024-07-26 03:52:50.918446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:36.243 [2024-07-26 03:52:50.918457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:36.243 [2024-07-26 03:52:50.918467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:36.243 
[2024-07-26 03:52:50.918478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:36.243 [2024-07-26 03:52:50.918488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:36.243 [2024-07-26 03:52:50.918499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:36.243 [2024-07-26 03:52:50.918510] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:36.243 [2024-07-26 03:52:50.918524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.243 [2024-07-26 03:52:50.918537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:36.243 [2024-07-26 03:52:50.918548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:36.243 [2024-07-26 03:52:50.918560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:36.243 [2024-07-26 03:52:50.918571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:36.243 [2024-07-26 03:52:50.918582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:36.243 [2024-07-26 03:52:50.918593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:36.243 [2024-07-26 03:52:50.918616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:36.243 [2024-07-26 03:52:50.918628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:36.243 [2024-07-26 03:52:50.918640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:36.243 [2024-07-26 03:52:50.918652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:36.243 [2024-07-26 03:52:50.918664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:36.243 [2024-07-26 03:52:50.918675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:36.243 [2024-07-26 03:52:50.918686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:36.243 [2024-07-26 03:52:50.918698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:36.243 [2024-07-26 03:52:50.918709] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:36.243 [2024-07-26 03:52:50.918721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.243 [2024-07-26 03:52:50.918734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:36.243 [2024-07-26 03:52:50.918745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:36.243 [2024-07-26 03:52:50.918757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:36.243 [2024-07-26 03:52:50.918769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:36.243 [2024-07-26 03:52:50.918781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:50.918793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:36.243 [2024-07-26 03:52:50.918804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:24:36.243 [2024-07-26 03:52:50.918828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:50.968352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:50.968426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.243 [2024-07-26 03:52:50.968457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.418 ms 00:24:36.243 [2024-07-26 03:52:50.968473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:50.968709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:50.968752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:36.243 [2024-07-26 03:52:50.968779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:36.243 [2024-07-26 03:52:50.968793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.016696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.016769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.243 [2024-07-26 03:52:51.016793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.840 ms 00:24:36.243 [2024-07-26 03:52:51.016808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.017014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.017051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.243 [2024-07-26 03:52:51.017070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:36.243 [2024-07-26 03:52:51.017084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.017463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.017498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.243 [2024-07-26 03:52:51.017515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:24:36.243 [2024-07-26 03:52:51.017529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.017726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.017759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.243 [2024-07-26 03:52:51.017776] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:24:36.243 [2024-07-26 03:52:51.017790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.038100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.038170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.243 [2024-07-26 03:52:51.038193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.248 ms 00:24:36.243 [2024-07-26 03:52:51.038207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.058952] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:36.243 [2024-07-26 03:52:51.059007] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:36.243 [2024-07-26 03:52:51.059031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.059047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:36.243 [2024-07-26 03:52:51.059063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.608 ms 00:24:36.243 [2024-07-26 03:52:51.059076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.096113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.096190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:36.243 [2024-07-26 03:52:51.096213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.903 ms 00:24:36.243 [2024-07-26 03:52:51.096228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.243 [2024-07-26 03:52:51.116489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.243 [2024-07-26 03:52:51.116549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:36.243 [2024-07-26 03:52:51.116571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.046 ms 00:24:36.243 [2024-07-26 03:52:51.116586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.244 [2024-07-26 03:52:51.135879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.244 [2024-07-26 03:52:51.135957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:36.244 [2024-07-26 03:52:51.135980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.112 ms 00:24:36.244 [2024-07-26 03:52:51.135993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.244 [2024-07-26 03:52:51.137079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.244 [2024-07-26 03:52:51.137123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:36.244 [2024-07-26 03:52:51.137141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:24:36.244 [2024-07-26 03:52:51.137161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.226149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.226233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:36.502 [2024-07-26 03:52:51.226259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.943 ms 00:24:36.502 [2024-07-26 03:52:51.226274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.242034] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:36.502 [2024-07-26 03:52:51.258962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.259043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:36.502 [2024-07-26 03:52:51.259067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.475 ms 00:24:36.502 [2024-07-26 03:52:51.259082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.259254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.259280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:36.502 [2024-07-26 03:52:51.259298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:36.502 [2024-07-26 03:52:51.259322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.259399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.259430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:36.502 [2024-07-26 03:52:51.259447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:36.502 [2024-07-26 03:52:51.259461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.259503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.259528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:36.502 [2024-07-26 03:52:51.259543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:36.502 [2024-07-26 03:52:51.259556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.259606] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:36.502 [2024-07-26 03:52:51.259639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.259657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:36.502 [2024-07-26 03:52:51.259674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:36.502 [2024-07-26 03:52:51.259689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.298210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.298304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:36.502 [2024-07-26 03:52:51.298329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.479 ms 00:24:36.502 [2024-07-26 03:52:51.298343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.502 [2024-07-26 03:52:51.298642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.502 [2024-07-26 03:52:51.298695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:36.502 [2024-07-26 03:52:51.298715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:36.502 [2024-07-26 03:52:51.298729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:36.502 [2024-07-26 03:52:51.299981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:36.502 [2024-07-26 03:52:51.305214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.918 ms, result 0 00:24:36.502 [2024-07-26 03:52:51.306147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.502 [2024-07-26 03:52:51.325264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:46.669  Copying: 26/256 [MB] (26 MBps) Copying: 52/256 [MB] (25 MBps) Copying: 79/256 [MB] (27 MBps) Copying: 107/256 [MB] (27 MBps) Copying: 134/256 [MB] (27 MBps) Copying: 161/256 [MB] (26 MBps) Copying: 187/256 [MB] (25 MBps) Copying: 213/256 [MB] (25 MBps) Copying: 239/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 26 MBps)[2024-07-26 03:53:01.356914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:46.669 [2024-07-26 03:53:01.372681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.372762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:46.669 [2024-07-26 03:53:01.372789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:46.669 [2024-07-26 03:53:01.372806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.372878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:46.669 [2024-07-26 03:53:01.377209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.377250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:46.669 [2024-07-26 03:53:01.377266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms 00:24:46.669 [2024-07-26 03:53:01.377278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.377573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.377600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:46.669 [2024-07-26 03:53:01.377615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:46.669 [2024-07-26 03:53:01.377626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.381434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.381471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:46.669 [2024-07-26 03:53:01.381496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.784 ms 00:24:46.669 [2024-07-26 03:53:01.381508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.389924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.389977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:46.669 [2024-07-26 03:53:01.389992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.376 ms 00:24:46.669 [2024-07-26 03:53:01.390005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.424002] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.424095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:46.669 [2024-07-26 03:53:01.424116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.889 ms 00:24:46.669 [2024-07-26 03:53:01.424128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.441927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.442006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:46.669 [2024-07-26 03:53:01.442036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.714 ms 00:24:46.669 [2024-07-26 03:53:01.442063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.442275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.442299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:46.669 [2024-07-26 03:53:01.442313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:46.669 [2024-07-26 03:53:01.442324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.669 [2024-07-26 03:53:01.474029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.669 [2024-07-26 03:53:01.474097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:46.670 [2024-07-26 03:53:01.474118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.678 ms 00:24:46.670 [2024-07-26 03:53:01.474129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-07-26 03:53:01.518445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-07-26 03:53:01.518547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:46.670 [2024-07-26 03:53:01.518579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.227 ms 00:24:46.670 [2024-07-26 03:53:01.518595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-07-26 03:53:01.557607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-07-26 03:53:01.557689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:46.670 [2024-07-26 03:53:01.557713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.871 ms 00:24:46.670 [2024-07-26 03:53:01.557727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.929 [2024-07-26 03:53:01.596236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.929 [2024-07-26 03:53:01.596327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:46.929 [2024-07-26 03:53:01.596351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.332 ms 00:24:46.929 [2024-07-26 03:53:01.596366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.929 [2024-07-26 03:53:01.596463] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:46.929 [2024-07-26 03:53:01.596504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 
03:53:01.596540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:46.929 [2024-07-26 03:53:01.596896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.596910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.596924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.596939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:24:46.930 [2024-07-26 03:53:01.596953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.596967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.596982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.596996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.597992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.598006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.598020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:46.930 [2024-07-26 03:53:01.598047] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:46.930 [2024-07-26 03:53:01.598062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
f7b734ba-648f-487f-8a7d-88c9b2c09583 00:24:46.930 [2024-07-26 03:53:01.598076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:46.930 [2024-07-26 03:53:01.598089] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:46.930 [2024-07-26 03:53:01.598119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:46.930 [2024-07-26 03:53:01.598133] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:46.930 [2024-07-26 03:53:01.598146] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:46.930 [2024-07-26 03:53:01.598160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:46.930 [2024-07-26 03:53:01.598173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:46.930 [2024-07-26 03:53:01.598185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:46.930 [2024-07-26 03:53:01.598197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:46.930 [2024-07-26 03:53:01.598211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.930 [2024-07-26 03:53:01.598225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:46.930 [2024-07-26 03:53:01.598245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:24:46.930 [2024-07-26 03:53:01.598259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.930 [2024-07-26 03:53:01.618968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.931 [2024-07-26 03:53:01.619036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:46.931 [2024-07-26 03:53:01.619059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.670 ms 00:24:46.931 [2024-07-26 03:53:01.619074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.931 [2024-07-26 03:53:01.619668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.931 [2024-07-26 03:53:01.619715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:46.931 [2024-07-26 03:53:01.619733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:24:46.931 [2024-07-26 03:53:01.619747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.931 [2024-07-26 03:53:01.668970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.931 [2024-07-26 03:53:01.669050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.931 [2024-07-26 03:53:01.669073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.931 [2024-07-26 03:53:01.669087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.931 [2024-07-26 03:53:01.669252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.931 [2024-07-26 03:53:01.669277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.931 [2024-07-26 03:53:01.669292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.931 [2024-07-26 03:53:01.669305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.931 [2024-07-26 03:53:01.669382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.931 [2024-07-26 03:53:01.669403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.931 
[2024-07-26 03:53:01.669418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.931 [2024-07-26 03:53:01.669431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.931 [2024-07-26 03:53:01.669459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.931 [2024-07-26 03:53:01.669476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.931 [2024-07-26 03:53:01.669497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.931 [2024-07-26 03:53:01.669511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.931 [2024-07-26 03:53:01.791515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.931 [2024-07-26 03:53:01.791608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.931 [2024-07-26 03:53:01.791633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.931 [2024-07-26 03:53:01.791648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.896737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.190 [2024-07-26 03:53:01.896866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.190 [2024-07-26 03:53:01.896890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.190 [2024-07-26 03:53:01.896905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.897028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.190 [2024-07-26 03:53:01.897049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.190 [2024-07-26 03:53:01.897063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.190 [2024-07-26 03:53:01.897077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.897118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.190 [2024-07-26 03:53:01.897134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.190 [2024-07-26 03:53:01.897148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.190 [2024-07-26 03:53:01.897168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.897329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.190 [2024-07-26 03:53:01.897364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.190 [2024-07-26 03:53:01.897380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.190 [2024-07-26 03:53:01.897394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.897455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.190 [2024-07-26 03:53:01.897475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:47.190 [2024-07-26 03:53:01.897489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.190 [2024-07-26 03:53:01.897503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.897575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.190 [2024-07-26 03:53:01.897602] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.190 [2024-07-26 03:53:01.897617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.190 [2024-07-26 03:53:01.897631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.897698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.190 [2024-07-26 03:53:01.897719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.190 [2024-07-26 03:53:01.897734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.190 [2024-07-26 03:53:01.897755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.190 [2024-07-26 03:53:01.897969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.315 ms, result 0 00:24:48.565 00:24:48.565 00:24:48.565 03:53:03 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:48.821 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:24:48.821 03:53:03 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:24:48.821 03:53:03 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:24:48.821 03:53:03 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:48.821 03:53:03 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:48.821 03:53:03 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:24:48.821 03:53:03 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:49.078 03:53:03 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81637 00:24:49.078 03:53:03 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81637 ']' 00:24:49.078 03:53:03 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81637 00:24:49.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81637) - No such process 00:24:49.078 Process with pid 81637 is not found 00:24:49.078 03:53:03 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 81637 is not found' 00:24:49.078 00:24:49.078 real 1m8.886s 00:24:49.078 user 1m34.435s 00:24:49.078 sys 0m6.866s 00:24:49.078 03:53:03 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.078 03:53:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:49.078 ************************************ 00:24:49.078 END TEST ftl_trim 00:24:49.078 ************************************ 00:24:49.078 03:53:03 ftl -- common/autotest_common.sh@1142 -- # return 0 00:24:49.078 03:53:03 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:49.078 03:53:03 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:49.078 03:53:03 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.078 03:53:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:49.078 ************************************ 00:24:49.078 START TEST ftl_restore 00:24:49.078 ************************************ 00:24:49.078 03:53:03 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:49.078 * Looking for test storage... 
00:24:49.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:49.078 03:53:03 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.QPmDqCHgsJ 00:24:49.079 03:53:03 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=81896 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 81896 00:24:49.079 03:53:03 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.079 03:53:03 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 81896 ']' 00:24:49.079 03:53:03 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.079 03:53:03 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.079 03:53:03 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.079 03:53:03 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.079 03:53:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:49.336 [2024-07-26 03:53:04.031308] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:24:49.336 [2024-07-26 03:53:04.031496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81896 ] 00:24:49.336 [2024-07-26 03:53:04.205593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.594 [2024-07-26 03:53:04.411632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.529 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.529 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:24:50.529 03:53:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:50.529 03:53:05 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:24:50.529 03:53:05 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:50.529 03:53:05 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:24:50.529 03:53:05 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:24:50.529 03:53:05 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:50.787 03:53:05 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:50.787 03:53:05 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:24:50.787 03:53:05 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:50.787 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:24:50.787 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:50.787 03:53:05 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:24:50.787 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:50.787 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:51.045 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:51.045 { 00:24:51.045 "name": "nvme0n1", 00:24:51.045 "aliases": [ 00:24:51.045 "4d320ebc-bbcb-4b18-ae2b-f6e0f19a227a" 00:24:51.045 ], 00:24:51.045 "product_name": "NVMe disk", 00:24:51.045 "block_size": 4096, 00:24:51.045 "num_blocks": 1310720, 00:24:51.045 "uuid": "4d320ebc-bbcb-4b18-ae2b-f6e0f19a227a", 00:24:51.045 "assigned_rate_limits": { 00:24:51.045 "rw_ios_per_sec": 0, 00:24:51.045 "rw_mbytes_per_sec": 0, 00:24:51.045 "r_mbytes_per_sec": 0, 00:24:51.045 "w_mbytes_per_sec": 0 00:24:51.045 }, 00:24:51.045 "claimed": true, 00:24:51.045 "claim_type": "read_many_write_one", 00:24:51.045 "zoned": false, 00:24:51.045 "supported_io_types": { 00:24:51.045 "read": true, 00:24:51.045 "write": true, 00:24:51.045 "unmap": true, 00:24:51.045 "flush": true, 00:24:51.045 "reset": true, 00:24:51.045 "nvme_admin": true, 00:24:51.045 "nvme_io": true, 00:24:51.045 "nvme_io_md": false, 00:24:51.045 "write_zeroes": true, 00:24:51.045 "zcopy": false, 00:24:51.045 "get_zone_info": false, 00:24:51.045 "zone_management": false, 00:24:51.045 "zone_append": false, 00:24:51.045 "compare": true, 00:24:51.045 "compare_and_write": false, 00:24:51.045 "abort": true, 00:24:51.045 "seek_hole": false, 00:24:51.045 "seek_data": false, 00:24:51.045 "copy": true, 00:24:51.045 "nvme_iov_md": false 00:24:51.045 }, 00:24:51.045 "driver_specific": { 00:24:51.045 "nvme": [ 00:24:51.045 { 00:24:51.045 "pci_address": "0000:00:11.0", 00:24:51.045 "trid": { 00:24:51.045 "trtype": "PCIe", 00:24:51.045 "traddr": "0000:00:11.0" 00:24:51.045 }, 00:24:51.045 "ctrlr_data": { 00:24:51.045 "cntlid": 0, 00:24:51.045 "vendor_id": "0x1b36", 00:24:51.045 "model_number": "QEMU NVMe Ctrl", 00:24:51.045 "serial_number": "12341", 00:24:51.045 "firmware_revision": "8.0.0", 00:24:51.045 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:51.045 "oacs": { 00:24:51.045 "security": 0, 00:24:51.045 "format": 1, 00:24:51.045 "firmware": 0, 00:24:51.045 "ns_manage": 1 00:24:51.045 }, 00:24:51.045 "multi_ctrlr": false, 00:24:51.045 "ana_reporting": false 00:24:51.045 }, 00:24:51.045 "vs": { 00:24:51.045 "nvme_version": "1.4" 00:24:51.045 }, 00:24:51.045 "ns_data": { 00:24:51.045 "id": 1, 00:24:51.045 "can_share": false 00:24:51.045 } 00:24:51.045 } 00:24:51.045 ], 00:24:51.045 "mp_policy": "active_passive" 00:24:51.045 } 00:24:51.045 } 00:24:51.045 ]' 00:24:51.045 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:51.045 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:51.045 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:51.046 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:51.046 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:51.046 03:53:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:24:51.046 03:53:05 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:51.046 03:53:05 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:51.046 03:53:05 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:51.046 03:53:05 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:51.046 03:53:05 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:51.304 03:53:06 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7559885e-44ca-4cc5-88fe-1153da12f9fd 00:24:51.304 03:53:06 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:51.304 03:53:06 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7559885e-44ca-4cc5-88fe-1153da12f9fd 00:24:51.562 03:53:06 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:51.821 03:53:06 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=97e62318-e8f9-41d7-bc1a-29ea2404daba 00:24:51.821 03:53:06 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 97e62318-e8f9-41d7-bc1a-29ea2404daba 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:24:52.079 03:53:06 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.079 03:53:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.079 03:53:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:52.079 03:53:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:24:52.079 03:53:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:52.079 03:53:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.338 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:52.338 { 00:24:52.338 "name": "e2c5244c-3452-4a41-a4cb-3caddc6d0a47", 00:24:52.338 "aliases": [ 00:24:52.338 "lvs/nvme0n1p0" 00:24:52.338 ], 00:24:52.338 "product_name": "Logical Volume", 00:24:52.338 "block_size": 4096, 00:24:52.338 "num_blocks": 26476544, 00:24:52.338 "uuid": "e2c5244c-3452-4a41-a4cb-3caddc6d0a47", 00:24:52.338 "assigned_rate_limits": { 00:24:52.338 "rw_ios_per_sec": 0, 00:24:52.338 "rw_mbytes_per_sec": 0, 00:24:52.338 "r_mbytes_per_sec": 0, 00:24:52.338 "w_mbytes_per_sec": 0 00:24:52.338 }, 00:24:52.338 "claimed": false, 00:24:52.338 "zoned": false, 00:24:52.338 "supported_io_types": { 00:24:52.338 "read": true, 00:24:52.338 "write": true, 00:24:52.338 "unmap": true, 00:24:52.338 "flush": false, 00:24:52.338 "reset": true, 00:24:52.338 "nvme_admin": false, 00:24:52.338 "nvme_io": false, 00:24:52.338 "nvme_io_md": false, 00:24:52.338 "write_zeroes": true, 00:24:52.338 "zcopy": false, 00:24:52.338 "get_zone_info": false, 00:24:52.338 "zone_management": false, 00:24:52.338 "zone_append": false, 00:24:52.338 "compare": false, 00:24:52.338 "compare_and_write": false, 00:24:52.338 "abort": 
false, 00:24:52.338 "seek_hole": true, 00:24:52.338 "seek_data": true, 00:24:52.338 "copy": false, 00:24:52.338 "nvme_iov_md": false 00:24:52.338 }, 00:24:52.338 "driver_specific": { 00:24:52.338 "lvol": { 00:24:52.338 "lvol_store_uuid": "97e62318-e8f9-41d7-bc1a-29ea2404daba", 00:24:52.338 "base_bdev": "nvme0n1", 00:24:52.338 "thin_provision": true, 00:24:52.338 "num_allocated_clusters": 0, 00:24:52.338 "snapshot": false, 00:24:52.338 "clone": false, 00:24:52.338 "esnap_clone": false 00:24:52.338 } 00:24:52.338 } 00:24:52.338 } 00:24:52.338 ]' 00:24:52.338 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:52.338 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:52.338 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:52.597 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:52.597 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:52.597 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:24:52.597 03:53:07 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:24:52.597 03:53:07 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:24:52.597 03:53:07 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:52.855 03:53:07 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:52.855 03:53:07 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:52.855 03:53:07 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.855 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:52.855 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:52.855 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:24:52.855 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:52.855 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:53.114 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:53.114 { 00:24:53.114 "name": "e2c5244c-3452-4a41-a4cb-3caddc6d0a47", 00:24:53.114 "aliases": [ 00:24:53.114 "lvs/nvme0n1p0" 00:24:53.114 ], 00:24:53.114 "product_name": "Logical Volume", 00:24:53.114 "block_size": 4096, 00:24:53.114 "num_blocks": 26476544, 00:24:53.114 "uuid": "e2c5244c-3452-4a41-a4cb-3caddc6d0a47", 00:24:53.114 "assigned_rate_limits": { 00:24:53.114 "rw_ios_per_sec": 0, 00:24:53.114 "rw_mbytes_per_sec": 0, 00:24:53.114 "r_mbytes_per_sec": 0, 00:24:53.114 "w_mbytes_per_sec": 0 00:24:53.114 }, 00:24:53.114 "claimed": false, 00:24:53.114 "zoned": false, 00:24:53.114 "supported_io_types": { 00:24:53.114 "read": true, 00:24:53.114 "write": true, 00:24:53.114 "unmap": true, 00:24:53.114 "flush": false, 00:24:53.114 "reset": true, 00:24:53.114 "nvme_admin": false, 00:24:53.114 "nvme_io": false, 00:24:53.114 "nvme_io_md": false, 00:24:53.114 "write_zeroes": true, 00:24:53.114 "zcopy": false, 00:24:53.114 "get_zone_info": false, 00:24:53.114 "zone_management": false, 00:24:53.114 "zone_append": false, 00:24:53.114 "compare": false, 00:24:53.114 "compare_and_write": false, 00:24:53.114 "abort": false, 00:24:53.114 "seek_hole": true, 00:24:53.114 "seek_data": 
true, 00:24:53.114 "copy": false, 00:24:53.114 "nvme_iov_md": false 00:24:53.115 }, 00:24:53.115 "driver_specific": { 00:24:53.115 "lvol": { 00:24:53.115 "lvol_store_uuid": "97e62318-e8f9-41d7-bc1a-29ea2404daba", 00:24:53.115 "base_bdev": "nvme0n1", 00:24:53.115 "thin_provision": true, 00:24:53.115 "num_allocated_clusters": 0, 00:24:53.115 "snapshot": false, 00:24:53.115 "clone": false, 00:24:53.115 "esnap_clone": false 00:24:53.115 } 00:24:53.115 } 00:24:53.115 } 00:24:53.115 ]' 00:24:53.115 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:53.115 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:53.115 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:53.115 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:53.115 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:53.115 03:53:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:24:53.115 03:53:07 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:24:53.115 03:53:07 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:53.374 03:53:08 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:53.374 03:53:08 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:53.374 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:53.374 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:53.374 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:24:53.374 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:53.374 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2c5244c-3452-4a41-a4cb-3caddc6d0a47 00:24:53.632 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:53.632 { 00:24:53.632 "name": "e2c5244c-3452-4a41-a4cb-3caddc6d0a47", 00:24:53.632 "aliases": [ 00:24:53.632 "lvs/nvme0n1p0" 00:24:53.632 ], 00:24:53.632 "product_name": "Logical Volume", 00:24:53.632 "block_size": 4096, 00:24:53.632 "num_blocks": 26476544, 00:24:53.632 "uuid": "e2c5244c-3452-4a41-a4cb-3caddc6d0a47", 00:24:53.632 "assigned_rate_limits": { 00:24:53.632 "rw_ios_per_sec": 0, 00:24:53.632 "rw_mbytes_per_sec": 0, 00:24:53.632 "r_mbytes_per_sec": 0, 00:24:53.632 "w_mbytes_per_sec": 0 00:24:53.632 }, 00:24:53.632 "claimed": false, 00:24:53.632 "zoned": false, 00:24:53.632 "supported_io_types": { 00:24:53.632 "read": true, 00:24:53.632 "write": true, 00:24:53.632 "unmap": true, 00:24:53.632 "flush": false, 00:24:53.632 "reset": true, 00:24:53.632 "nvme_admin": false, 00:24:53.632 "nvme_io": false, 00:24:53.632 "nvme_io_md": false, 00:24:53.632 "write_zeroes": true, 00:24:53.632 "zcopy": false, 00:24:53.632 "get_zone_info": false, 00:24:53.632 "zone_management": false, 00:24:53.632 "zone_append": false, 00:24:53.632 "compare": false, 00:24:53.632 "compare_and_write": false, 00:24:53.632 "abort": false, 00:24:53.632 "seek_hole": true, 00:24:53.632 "seek_data": true, 00:24:53.632 "copy": false, 00:24:53.633 "nvme_iov_md": false 00:24:53.633 }, 00:24:53.633 "driver_specific": { 00:24:53.633 "lvol": { 00:24:53.633 "lvol_store_uuid": "97e62318-e8f9-41d7-bc1a-29ea2404daba", 00:24:53.633 "base_bdev": 
"nvme0n1", 00:24:53.633 "thin_provision": true, 00:24:53.633 "num_allocated_clusters": 0, 00:24:53.633 "snapshot": false, 00:24:53.633 "clone": false, 00:24:53.633 "esnap_clone": false 00:24:53.633 } 00:24:53.633 } 00:24:53.633 } 00:24:53.633 ]' 00:24:53.633 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:53.633 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:53.633 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:53.891 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:53.891 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:53.891 03:53:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:24:53.891 03:53:08 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:53.892 03:53:08 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e2c5244c-3452-4a41-a4cb-3caddc6d0a47 --l2p_dram_limit 10' 00:24:53.892 03:53:08 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:53.892 03:53:08 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:53.892 03:53:08 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:53.892 03:53:08 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:53.892 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:53.892 03:53:08 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e2c5244c-3452-4a41-a4cb-3caddc6d0a47 --l2p_dram_limit 10 -c nvc0n1p0 00:24:53.892 [2024-07-26 03:53:08.786787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.892 [2024-07-26 03:53:08.786885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:53.892 [2024-07-26 03:53:08.786921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:53.892 [2024-07-26 03:53:08.786941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.892 [2024-07-26 03:53:08.787039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.892 [2024-07-26 03:53:08.787069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.892 [2024-07-26 03:53:08.787094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:53.892 [2024-07-26 03:53:08.787113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.892 [2024-07-26 03:53:08.787148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:53.892 [2024-07-26 03:53:08.788136] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:53.892 [2024-07-26 03:53:08.788174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.892 [2024-07-26 03:53:08.788196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.892 [2024-07-26 03:53:08.788210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:24:53.892 [2024-07-26 03:53:08.788224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.892 [2024-07-26 03:53:08.788361] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 67f15440-6dde-4ab9-b185-89b32e8c0bb4 00:24:53.892 [2024-07-26 
03:53:08.789534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.892 [2024-07-26 03:53:08.789585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:53.892 [2024-07-26 03:53:08.789608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:53.892 [2024-07-26 03:53:08.789621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.892 [2024-07-26 03:53:08.794627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.892 [2024-07-26 03:53:08.794674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.892 [2024-07-26 03:53:08.794695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.912 ms 00:24:53.892 [2024-07-26 03:53:08.794708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.892 [2024-07-26 03:53:08.794862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.892 [2024-07-26 03:53:08.794886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.892 [2024-07-26 03:53:08.794903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:24:53.892 [2024-07-26 03:53:08.794915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.892 [2024-07-26 03:53:08.795008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.892 [2024-07-26 03:53:08.795046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:53.892 [2024-07-26 03:53:08.795069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:53.892 [2024-07-26 03:53:08.795081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.892 [2024-07-26 03:53:08.795118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:54.151 [2024-07-26 03:53:08.799883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.151 [2024-07-26 03:53:08.799934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.151 [2024-07-26 03:53:08.799953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:24:54.151 [2024-07-26 03:53:08.799968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.151 [2024-07-26 03:53:08.800018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.151 [2024-07-26 03:53:08.800039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:54.151 [2024-07-26 03:53:08.800053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:54.151 [2024-07-26 03:53:08.800067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.151 [2024-07-26 03:53:08.800126] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:54.151 [2024-07-26 03:53:08.800296] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:54.151 [2024-07-26 03:53:08.800330] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:54.151 [2024-07-26 03:53:08.800485] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:54.151 [2024-07-26 03:53:08.800506] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:24:54.151 [2024-07-26 03:53:08.800523] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:54.151 [2024-07-26 03:53:08.800536] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:54.151 [2024-07-26 03:53:08.800556] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:54.151 [2024-07-26 03:53:08.800567] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:54.151 [2024-07-26 03:53:08.800581] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:54.151 [2024-07-26 03:53:08.800594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.151 [2024-07-26 03:53:08.800609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:54.151 [2024-07-26 03:53:08.800622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:24:54.151 [2024-07-26 03:53:08.800636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.151 [2024-07-26 03:53:08.800730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.151 [2024-07-26 03:53:08.800749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:54.151 [2024-07-26 03:53:08.800762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:54.151 [2024-07-26 03:53:08.800780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.151 [2024-07-26 03:53:08.800906] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:54.151 [2024-07-26 03:53:08.800933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:54.151 [2024-07-26 03:53:08.800957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.151 [2024-07-26 03:53:08.800973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.151 [2024-07-26 03:53:08.800986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:54.151 [2024-07-26 03:53:08.801000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:54.151 [2024-07-26 03:53:08.801025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:54.151 [2024-07-26 03:53:08.801037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.151 [2024-07-26 03:53:08.801062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:54.151 [2024-07-26 03:53:08.801078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:54.151 [2024-07-26 03:53:08.801089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.151 [2024-07-26 03:53:08.801102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:54.151 [2024-07-26 03:53:08.801113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:54.151 [2024-07-26 03:53:08.801126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:54.151 [2024-07-26 03:53:08.801155] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:24:54.151 [2024-07-26 03:53:08.801166] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:54.151 [2024-07-26 03:53:08.801190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.151 [2024-07-26 03:53:08.801216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:54.151 [2024-07-26 03:53:08.801229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.151 [2024-07-26 03:53:08.801253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:54.151 [2024-07-26 03:53:08.801264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.151 [2024-07-26 03:53:08.801288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:54.151 [2024-07-26 03:53:08.801301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801311] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.151 [2024-07-26 03:53:08.801324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:54.151 [2024-07-26 03:53:08.801335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:54.151 [2024-07-26 03:53:08.801351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.151 [2024-07-26 03:53:08.801362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:54.151 [2024-07-26 03:53:08.801376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:54.151 [2024-07-26 03:53:08.801386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.151 [2024-07-26 03:53:08.801400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:54.151 [2024-07-26 03:53:08.801411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:54.152 [2024-07-26 03:53:08.801424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.152 [2024-07-26 03:53:08.801436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:54.152 [2024-07-26 03:53:08.801449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:54.152 [2024-07-26 03:53:08.801460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.152 [2024-07-26 03:53:08.801472] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:54.152 [2024-07-26 03:53:08.801484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:54.152 [2024-07-26 03:53:08.801498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.152 [2024-07-26 03:53:08.801509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.152 [2024-07-26 03:53:08.801524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:54.152 [2024-07-26 03:53:08.801535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:54.152 [2024-07-26 03:53:08.801550] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:54.152 [2024-07-26 03:53:08.801562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:54.152 [2024-07-26 03:53:08.801575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:54.152 [2024-07-26 03:53:08.801586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:54.152 [2024-07-26 03:53:08.801604] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:54.152 [2024-07-26 03:53:08.801622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.152 [2024-07-26 03:53:08.801638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:54.152 [2024-07-26 03:53:08.801650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:54.152 [2024-07-26 03:53:08.801664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:54.152 [2024-07-26 03:53:08.801677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:54.152 [2024-07-26 03:53:08.801690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:54.152 [2024-07-26 03:53:08.801702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:54.152 [2024-07-26 03:53:08.801718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:54.152 [2024-07-26 03:53:08.801730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:54.152 [2024-07-26 03:53:08.801743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:54.152 [2024-07-26 03:53:08.801755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:54.152 [2024-07-26 03:53:08.801771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:54.152 [2024-07-26 03:53:08.801783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:54.152 [2024-07-26 03:53:08.801797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:54.152 [2024-07-26 03:53:08.801810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:54.152 [2024-07-26 03:53:08.801840] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:54.152 [2024-07-26 03:53:08.801855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.152 [2024-07-26 03:53:08.801870] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.152 [2024-07-26 03:53:08.801883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:54.152 [2024-07-26 03:53:08.801897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:54.152 [2024-07-26 03:53:08.801909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:54.152 [2024-07-26 03:53:08.801925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.152 [2024-07-26 03:53:08.801939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:54.152 [2024-07-26 03:53:08.801954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:24:54.152 [2024-07-26 03:53:08.801965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.152 [2024-07-26 03:53:08.802021] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:54.152 [2024-07-26 03:53:08.802045] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:56.053 [2024-07-26 03:53:10.793001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.793093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:56.053 [2024-07-26 03:53:10.793125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1990.980 ms 00:24:56.053 [2024-07-26 03:53:10.793148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.827374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.827444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.053 [2024-07-26 03:53:10.827471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.913 ms 00:24:56.053 [2024-07-26 03:53:10.827485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.827682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.827704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:56.053 [2024-07-26 03:53:10.827727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:56.053 [2024-07-26 03:53:10.827739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.867940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.868006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.053 [2024-07-26 03:53:10.868031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.098 ms 00:24:56.053 [2024-07-26 03:53:10.868044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.868111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.868128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.053 [2024-07-26 03:53:10.868151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:24:56.053 [2024-07-26 03:53:10.868163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.868565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.868597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.053 [2024-07-26 03:53:10.868616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:24:56.053 [2024-07-26 03:53:10.868629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.868785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.868807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.053 [2024-07-26 03:53:10.868841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:24:56.053 [2024-07-26 03:53:10.868856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.887030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.887084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.053 [2024-07-26 03:53:10.887113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:24:56.053 [2024-07-26 03:53:10.887138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.053 [2024-07-26 03:53:10.901197] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:56.053 [2024-07-26 03:53:10.904049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.053 [2024-07-26 03:53:10.904099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:56.053 [2024-07-26 03:53:10.904129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.784 ms 00:24:56.053 [2024-07-26 03:53:10.904146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:10.978260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 03:53:10.978347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:56.312 [2024-07-26 03:53:10.978372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.065 ms 00:24:56.312 [2024-07-26 03:53:10.978388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:10.978643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 03:53:10.978691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:56.312 [2024-07-26 03:53:10.978708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:24:56.312 [2024-07-26 03:53:10.978726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:11.011915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 03:53:11.012014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:56.312 [2024-07-26 03:53:11.012042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.103 ms 00:24:56.312 [2024-07-26 03:53:11.012065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:11.044449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 
03:53:11.044508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:56.312 [2024-07-26 03:53:11.044529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.295 ms 00:24:56.312 [2024-07-26 03:53:11.044544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:11.045299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 03:53:11.045339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:56.312 [2024-07-26 03:53:11.045358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:24:56.312 [2024-07-26 03:53:11.045373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:11.134810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 03:53:11.134894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:56.312 [2024-07-26 03:53:11.134918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.364 ms 00:24:56.312 [2024-07-26 03:53:11.134938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:11.168061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 03:53:11.168119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:56.312 [2024-07-26 03:53:11.168140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.064 ms 00:24:56.312 [2024-07-26 03:53:11.168156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.312 [2024-07-26 03:53:11.200734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.312 [2024-07-26 03:53:11.200808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:56.312 [2024-07-26 03:53:11.200849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.523 ms 00:24:56.312 [2024-07-26 03:53:11.200866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.570 [2024-07-26 03:53:11.233551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.571 [2024-07-26 03:53:11.233607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:56.571 [2024-07-26 03:53:11.233628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.629 ms 00:24:56.571 [2024-07-26 03:53:11.233643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.571 [2024-07-26 03:53:11.233703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.571 [2024-07-26 03:53:11.233727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:56.571 [2024-07-26 03:53:11.233742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:56.571 [2024-07-26 03:53:11.233759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.571 [2024-07-26 03:53:11.233899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.571 [2024-07-26 03:53:11.233930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:56.571 [2024-07-26 03:53:11.233945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:56.571 [2024-07-26 03:53:11.233959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.571 [2024-07-26 03:53:11.235126] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2447.831 ms, result 0 00:24:56.571 { 00:24:56.571 "name": "ftl0", 00:24:56.571 "uuid": "67f15440-6dde-4ab9-b185-89b32e8c0bb4" 00:24:56.571 } 00:24:56.571 03:53:11 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:56.571 03:53:11 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:56.829 03:53:11 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:56.829 03:53:11 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:57.088 [2024-07-26 03:53:11.802648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.802715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:57.088 [2024-07-26 03:53:11.802742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:57.088 [2024-07-26 03:53:11.802765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.802806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:57.088 [2024-07-26 03:53:11.806336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.806382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:57.088 [2024-07-26 03:53:11.806400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.488 ms 00:24:57.088 [2024-07-26 03:53:11.806415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.806769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.806827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:57.088 [2024-07-26 03:53:11.806858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:24:57.088 [2024-07-26 03:53:11.806875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.810241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.810292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:57.088 [2024-07-26 03:53:11.810311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.339 ms 00:24:57.088 [2024-07-26 03:53:11.810326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.817153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.817236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:57.088 [2024-07-26 03:53:11.817278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.793 ms 00:24:57.088 [2024-07-26 03:53:11.817298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.849853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.849911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:57.088 [2024-07-26 03:53:11.849938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.446 ms 00:24:57.088 [2024-07-26 03:53:11.849955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 
03:53:11.869150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.869215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:57.088 [2024-07-26 03:53:11.869236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.134 ms 00:24:57.088 [2024-07-26 03:53:11.869252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.869453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.869482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:57.088 [2024-07-26 03:53:11.869497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:57.088 [2024-07-26 03:53:11.869512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.901706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.901772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:57.088 [2024-07-26 03:53:11.901793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.164 ms 00:24:57.088 [2024-07-26 03:53:11.901809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.933949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.934024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:57.088 [2024-07-26 03:53:11.934044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.062 ms 00:24:57.088 [2024-07-26 03:53:11.934059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.088 [2024-07-26 03:53:11.965670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.088 [2024-07-26 03:53:11.965733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:57.088 [2024-07-26 03:53:11.965762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.555 ms 00:24:57.088 [2024-07-26 03:53:11.965779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.409 [2024-07-26 03:53:11.997484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.409 [2024-07-26 03:53:11.997540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:57.409 [2024-07-26 03:53:11.997560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.549 ms 00:24:57.409 [2024-07-26 03:53:11.997574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.409 [2024-07-26 03:53:11.997628] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:57.409 [2024-07-26 03:53:11.997658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 
03:53:11.997734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.997996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:24:57.409 [2024-07-26 03:53:11.998107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:57.409 [2024-07-26 03:53:11.998314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.998998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:57.410 [2024-07-26 03:53:11.999142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:57.410 [2024-07-26 03:53:11.999155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67f15440-6dde-4ab9-b185-89b32e8c0bb4 00:24:57.410 [2024-07-26 03:53:11.999169] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:57.410 [2024-07-26 03:53:11.999181] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:57.410 [2024-07-26 03:53:11.999197] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:57.410 [2024-07-26 03:53:11.999209] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:57.410 [2024-07-26 03:53:11.999222] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:57.410 [2024-07-26 03:53:11.999234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:57.410 [2024-07-26 03:53:11.999248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:57.410 [2024-07-26 03:53:11.999259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:57.410 [2024-07-26 03:53:11.999271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:57.410 [2024-07-26 03:53:11.999284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.410 [2024-07-26 03:53:11.999298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:57.410 [2024-07-26 03:53:11.999312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:24:57.410 [2024-07-26 03:53:11.999330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.410 [2024-07-26 03:53:12.016611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.410 [2024-07-26 03:53:12.016663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:57.410 [2024-07-26 03:53:12.016683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.210 ms 00:24:57.410 [2024-07-26 03:53:12.016699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.410 [2024-07-26 03:53:12.017162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.410 [2024-07-26 03:53:12.017201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:57.410 [2024-07-26 03:53:12.017223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:24:57.410 [2024-07-26 03:53:12.017238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.410 [2024-07-26 03:53:12.071259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.410 [2024-07-26 03:53:12.071338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:57.410 [2024-07-26 03:53:12.071359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.410 [2024-07-26 03:53:12.071374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.410 [2024-07-26 03:53:12.071465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.410 [2024-07-26 03:53:12.071485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:57.410 [2024-07-26 03:53:12.071502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.410 [2024-07-26 03:53:12.071516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.410 [2024-07-26 03:53:12.071673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.410 [2024-07-26 03:53:12.071700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:57.410 [2024-07-26 03:53:12.071715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.410 [2024-07-26 03:53:12.071729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.410 [2024-07-26 03:53:12.071757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.410 [2024-07-26 03:53:12.071778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:24:57.410 [2024-07-26 03:53:12.071791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.410 [2024-07-26 03:53:12.071808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.410 [2024-07-26 03:53:12.174713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.410 [2024-07-26 03:53:12.174788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.410 [2024-07-26 03:53:12.174810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.174839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.262710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.411 [2024-07-26 03:53:12.262790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.411 [2024-07-26 03:53:12.262848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.262867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.263010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.411 [2024-07-26 03:53:12.263035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.411 [2024-07-26 03:53:12.263049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.263064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.263134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.411 [2024-07-26 03:53:12.263160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.411 [2024-07-26 03:53:12.263174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.263188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.263322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.411 [2024-07-26 03:53:12.263358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.411 [2024-07-26 03:53:12.263374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.263389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.263445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.411 [2024-07-26 03:53:12.263487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:57.411 [2024-07-26 03:53:12.263502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.263516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.263570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.411 [2024-07-26 03:53:12.263595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.411 [2024-07-26 03:53:12.263608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.263622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.263681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.411 [2024-07-26 03:53:12.263705] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.411 [2024-07-26 03:53:12.263718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.411 [2024-07-26 03:53:12.263732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.411 [2024-07-26 03:53:12.263920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 461.249 ms, result 0 00:24:57.411 true 00:24:57.411 03:53:12 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 81896 00:24:57.411 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81896 ']' 00:24:57.411 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81896 00:24:57.411 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:24:57.411 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.411 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81896 00:24:57.669 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:57.669 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:57.669 killing process with pid 81896 00:24:57.669 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81896' 00:24:57.669 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 81896 00:24:57.669 03:53:12 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 81896 00:25:02.941 03:53:17 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:07.141 262144+0 records in 00:25:07.141 262144+0 records out 00:25:07.141 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.89991 s, 219 MB/s 00:25:07.141 03:53:21 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:09.693 03:53:24 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:09.693 [2024-07-26 03:53:24.461092] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:25:09.693 [2024-07-26 03:53:24.461292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82140 ] 00:25:09.952 [2024-07-26 03:53:24.646072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.952 [2024-07-26 03:53:24.844419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.531 [2024-07-26 03:53:25.169320] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.531 [2024-07-26 03:53:25.169404] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.531 [2024-07-26 03:53:25.332568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.531 [2024-07-26 03:53:25.332637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:10.531 [2024-07-26 03:53:25.332658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:10.531 [2024-07-26 03:53:25.332671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.531 [2024-07-26 03:53:25.332738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.531 [2024-07-26 03:53:25.332757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.531 [2024-07-26 03:53:25.332770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:10.531 [2024-07-26 03:53:25.332785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.531 [2024-07-26 03:53:25.332842] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:10.531 [2024-07-26 03:53:25.333799] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:10.531 [2024-07-26 03:53:25.333879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.531 [2024-07-26 03:53:25.333898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.531 [2024-07-26 03:53:25.333911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:25:10.531 [2024-07-26 03:53:25.333924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.531 [2024-07-26 03:53:25.335334] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:10.531 [2024-07-26 03:53:25.352320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.531 [2024-07-26 03:53:25.352370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:10.531 [2024-07-26 03:53:25.352389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.993 ms 00:25:10.531 [2024-07-26 03:53:25.352400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.531 [2024-07-26 03:53:25.352479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.532 [2024-07-26 03:53:25.352502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:10.532 [2024-07-26 03:53:25.352516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:10.532 [2024-07-26 03:53:25.352526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.357262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:10.532 [2024-07-26 03:53:25.357310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.532 [2024-07-26 03:53:25.357327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.636 ms 00:25:10.532 [2024-07-26 03:53:25.357338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.357444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.532 [2024-07-26 03:53:25.357463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.532 [2024-07-26 03:53:25.357476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:10.532 [2024-07-26 03:53:25.357486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.357560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.532 [2024-07-26 03:53:25.357579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:10.532 [2024-07-26 03:53:25.357591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:10.532 [2024-07-26 03:53:25.357602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.357636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:10.532 [2024-07-26 03:53:25.361972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.532 [2024-07-26 03:53:25.362020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.532 [2024-07-26 03:53:25.362038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.345 ms 00:25:10.532 [2024-07-26 03:53:25.362058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.362113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.532 [2024-07-26 03:53:25.362139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:10.532 [2024-07-26 03:53:25.362153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:10.532 [2024-07-26 03:53:25.362163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.362219] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:10.532 [2024-07-26 03:53:25.362253] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:10.532 [2024-07-26 03:53:25.362305] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:10.532 [2024-07-26 03:53:25.362335] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:10.532 [2024-07-26 03:53:25.362452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:10.532 [2024-07-26 03:53:25.362473] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:10.532 [2024-07-26 03:53:25.362492] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:10.532 [2024-07-26 03:53:25.362516] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:10.532 [2024-07-26 03:53:25.362530] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:10.532 [2024-07-26 03:53:25.362543] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:10.532 [2024-07-26 03:53:25.362554] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:10.532 [2024-07-26 03:53:25.362571] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:10.532 [2024-07-26 03:53:25.362586] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:10.532 [2024-07-26 03:53:25.362598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.532 [2024-07-26 03:53:25.362614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:10.532 [2024-07-26 03:53:25.362640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:25:10.532 [2024-07-26 03:53:25.362659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.362767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.532 [2024-07-26 03:53:25.362791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:10.532 [2024-07-26 03:53:25.362807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:10.532 [2024-07-26 03:53:25.362837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.532 [2024-07-26 03:53:25.362992] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:10.532 [2024-07-26 03:53:25.363024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:10.532 [2024-07-26 03:53:25.363047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:10.532 [2024-07-26 03:53:25.363086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:10.532 [2024-07-26 03:53:25.363124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.532 [2024-07-26 03:53:25.363145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:10.532 [2024-07-26 03:53:25.363161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:10.532 [2024-07-26 03:53:25.363179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.532 [2024-07-26 03:53:25.363191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:10.532 [2024-07-26 03:53:25.363201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:10.532 [2024-07-26 03:53:25.363211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:10.532 [2024-07-26 03:53:25.363241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363252] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:10.532 [2024-07-26 03:53:25.363289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:10.532 [2024-07-26 03:53:25.363335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:10.532 [2024-07-26 03:53:25.363366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:10.532 [2024-07-26 03:53:25.363408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:10.532 [2024-07-26 03:53:25.363438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.532 [2024-07-26 03:53:25.363466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:10.532 [2024-07-26 03:53:25.363478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:10.532 [2024-07-26 03:53:25.363488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.532 [2024-07-26 03:53:25.363498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:10.532 [2024-07-26 03:53:25.363509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:10.532 [2024-07-26 03:53:25.363521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:10.532 [2024-07-26 03:53:25.363553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:10.532 [2024-07-26 03:53:25.363563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363573] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:10.532 [2024-07-26 03:53:25.363584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:10.532 [2024-07-26 03:53:25.363598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.532 [2024-07-26 03:53:25.363627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:10.532 [2024-07-26 03:53:25.363638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:10.532 [2024-07-26 03:53:25.363648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:10.532 
[2024-07-26 03:53:25.363658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:10.532 [2024-07-26 03:53:25.363669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:10.532 [2024-07-26 03:53:25.363687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:10.532 [2024-07-26 03:53:25.363703] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:10.532 [2024-07-26 03:53:25.363718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.532 [2024-07-26 03:53:25.363730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:10.533 [2024-07-26 03:53:25.363741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:10.533 [2024-07-26 03:53:25.363757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:10.533 [2024-07-26 03:53:25.363777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:10.533 [2024-07-26 03:53:25.363789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:10.533 [2024-07-26 03:53:25.363801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:10.533 [2024-07-26 03:53:25.363811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:10.533 [2024-07-26 03:53:25.363850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:10.533 [2024-07-26 03:53:25.363863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:10.533 [2024-07-26 03:53:25.363874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:10.533 [2024-07-26 03:53:25.363886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:10.533 [2024-07-26 03:53:25.363899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:10.533 [2024-07-26 03:53:25.363918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:10.533 [2024-07-26 03:53:25.363931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:10.533 [2024-07-26 03:53:25.363942] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:10.533 [2024-07-26 03:53:25.363954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.533 [2024-07-26 03:53:25.363982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:10.533 [2024-07-26 03:53:25.363999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:10.533 [2024-07-26 03:53:25.364011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:10.533 [2024-07-26 03:53:25.364023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:10.533 [2024-07-26 03:53:25.364036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.533 [2024-07-26 03:53:25.364056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:10.533 [2024-07-26 03:53:25.364069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:25:10.533 [2024-07-26 03:53:25.364080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.533 [2024-07-26 03:53:25.404572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.533 [2024-07-26 03:53:25.404651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.533 [2024-07-26 03:53:25.404680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.419 ms 00:25:10.533 [2024-07-26 03:53:25.404693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.533 [2024-07-26 03:53:25.404848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.533 [2024-07-26 03:53:25.404877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:10.533 [2024-07-26 03:53:25.404894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:10.533 [2024-07-26 03:53:25.404905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.445558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.445634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.792 [2024-07-26 03:53:25.445655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.545 ms 00:25:10.792 [2024-07-26 03:53:25.445667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.445769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.445789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.792 [2024-07-26 03:53:25.445801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:10.792 [2024-07-26 03:53:25.445851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.446339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.446376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.792 [2024-07-26 03:53:25.446401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:25:10.792 [2024-07-26 03:53:25.446417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.446598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.446620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.792 [2024-07-26 03:53:25.446652] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:10.792 [2024-07-26 03:53:25.446673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.464059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.464121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.792 [2024-07-26 03:53:25.464146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.344 ms 00:25:10.792 [2024-07-26 03:53:25.464175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.481223] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:10.792 [2024-07-26 03:53:25.481284] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:10.792 [2024-07-26 03:53:25.481307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.481320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:10.792 [2024-07-26 03:53:25.481334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.948 ms 00:25:10.792 [2024-07-26 03:53:25.481351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.512448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.512505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:10.792 [2024-07-26 03:53:25.512531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.041 ms 00:25:10.792 [2024-07-26 03:53:25.512543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.529182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.529238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:10.792 [2024-07-26 03:53:25.529257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.554 ms 00:25:10.792 [2024-07-26 03:53:25.529275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.545474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.545528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:10.792 [2024-07-26 03:53:25.545546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.140 ms 00:25:10.792 [2024-07-26 03:53:25.545558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.546385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.546426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:10.792 [2024-07-26 03:53:25.546442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:25:10.792 [2024-07-26 03:53:25.546454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.622036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.622119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:10.792 [2024-07-26 03:53:25.622150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.554 ms 00:25:10.792 [2024-07-26 03:53:25.622165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.635463] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:10.792 [2024-07-26 03:53:25.638201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.638240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:10.792 [2024-07-26 03:53:25.638258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.949 ms 00:25:10.792 [2024-07-26 03:53:25.638270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.638384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.638405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:10.792 [2024-07-26 03:53:25.638418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:10.792 [2024-07-26 03:53:25.638429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.638521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.638549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:10.792 [2024-07-26 03:53:25.638562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:10.792 [2024-07-26 03:53:25.638573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-07-26 03:53:25.638605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-07-26 03:53:25.638620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:10.792 [2024-07-26 03:53:25.638649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:10.793 [2024-07-26 03:53:25.638660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-07-26 03:53:25.638702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:10.793 [2024-07-26 03:53:25.638719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.793 [2024-07-26 03:53:25.638729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:10.793 [2024-07-26 03:53:25.638745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:10.793 [2024-07-26 03:53:25.638756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-07-26 03:53:25.671326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.793 [2024-07-26 03:53:25.671387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.793 [2024-07-26 03:53:25.671407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.541 ms 00:25:10.793 [2024-07-26 03:53:25.671419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-07-26 03:53:25.671522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.793 [2024-07-26 03:53:25.671548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.793 [2024-07-26 03:53:25.671561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:10.793 [2024-07-26 03:53:25.671572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:10.793 [2024-07-26 03:53:25.672900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.799 ms, result 0 00:25:45.475  Copying: 29/1024 [MB] (29 MBps) Copying: 59/1024 [MB] (29 MBps) Copying: 88/1024 [MB] (29 MBps) Copying: 120/1024 [MB] (31 MBps) Copying: 152/1024 [MB] (32 MBps) Copying: 182/1024 [MB] (30 MBps) Copying: 212/1024 [MB] (29 MBps) Copying: 243/1024 [MB] (30 MBps) Copying: 274/1024 [MB] (31 MBps) Copying: 306/1024 [MB] (32 MBps) Copying: 336/1024 [MB] (29 MBps) Copying: 367/1024 [MB] (31 MBps) Copying: 398/1024 [MB] (31 MBps) Copying: 425/1024 [MB] (26 MBps) Copying: 456/1024 [MB] (30 MBps) Copying: 486/1024 [MB] (30 MBps) Copying: 514/1024 [MB] (27 MBps) Copying: 542/1024 [MB] (28 MBps) Copying: 574/1024 [MB] (31 MBps) Copying: 603/1024 [MB] (29 MBps) Copying: 630/1024 [MB] (27 MBps) Copying: 659/1024 [MB] (28 MBps) Copying: 688/1024 [MB] (28 MBps) Copying: 717/1024 [MB] (28 MBps) Copying: 745/1024 [MB] (28 MBps) Copying: 775/1024 [MB] (29 MBps) Copying: 804/1024 [MB] (29 MBps) Copying: 832/1024 [MB] (28 MBps) Copying: 863/1024 [MB] (30 MBps) Copying: 894/1024 [MB] (30 MBps) Copying: 924/1024 [MB] (29 MBps) Copying: 953/1024 [MB] (29 MBps) Copying: 982/1024 [MB] (28 MBps) Copying: 1013/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-26 03:54:00.090101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.090175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:45.475 [2024-07-26 03:54:00.090201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:45.475 [2024-07-26 03:54:00.090216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.090251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:45.475 [2024-07-26 03:54:00.094349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.094399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:45.475 [2024-07-26 03:54:00.094419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.069 ms 00:25:45.475 [2024-07-26 03:54:00.094433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.096126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.096178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:45.475 [2024-07-26 03:54:00.096199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:25:45.475 [2024-07-26 03:54:00.096213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.115250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.115321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:45.475 [2024-07-26 03:54:00.115344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.008 ms 00:25:45.475 [2024-07-26 03:54:00.115358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.123720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.123780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:45.475 [2024-07-26 03:54:00.123800] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.307 ms 00:25:45.475 [2024-07-26 03:54:00.123826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.163042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.163122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:45.475 [2024-07-26 03:54:00.163145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.111 ms 00:25:45.475 [2024-07-26 03:54:00.163160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.184743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.184856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:45.475 [2024-07-26 03:54:00.184883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.502 ms 00:25:45.475 [2024-07-26 03:54:00.184897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.185153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.185182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:45.475 [2024-07-26 03:54:00.185198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:25:45.475 [2024-07-26 03:54:00.185219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.221841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.221913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:45.475 [2024-07-26 03:54:00.221934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.594 ms 00:25:45.475 [2024-07-26 03:54:00.221946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.253997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.254074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:45.475 [2024-07-26 03:54:00.254094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.984 ms 00:25:45.475 [2024-07-26 03:54:00.254105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.285151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.285226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:45.475 [2024-07-26 03:54:00.285248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.982 ms 00:25:45.475 [2024-07-26 03:54:00.285277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.316991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.475 [2024-07-26 03:54:00.317056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:45.475 [2024-07-26 03:54:00.317077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.582 ms 00:25:45.475 [2024-07-26 03:54:00.317088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.475 [2024-07-26 03:54:00.317148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:45.475 [2024-07-26 03:54:00.317174] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:45.475 [2024-07-26 03:54:00.317270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317481] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 
03:54:00.317783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.317989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:25:45.476 [2024-07-26 03:54:00.318092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:45.476 [2024-07-26 03:54:00.318265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:25:45.477 [2024-07-26 03:54:00.318408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:45.477 [2024-07-26 03:54:00.318419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67f15440-6dde-4ab9-b185-89b32e8c0bb4 00:25:45.477 [2024-07-26 03:54:00.318430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:45.477 [2024-07-26 03:54:00.318454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:45.477 [2024-07-26 03:54:00.318467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:45.477 [2024-07-26 03:54:00.318479] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:45.477 [2024-07-26 03:54:00.318489] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:45.477 [2024-07-26 03:54:00.318500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:45.477 [2024-07-26 03:54:00.318511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:45.477 [2024-07-26 03:54:00.318521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:45.477 [2024-07-26 03:54:00.318531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:45.477 [2024-07-26 03:54:00.318542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.477 [2024-07-26 03:54:00.318553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:45.477 [2024-07-26 03:54:00.318569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:25:45.477 [2024-07-26 03:54:00.318585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.477 [2024-07-26 03:54:00.335457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.477 [2024-07-26 03:54:00.335510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:45.477 [2024-07-26 03:54:00.335528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.820 ms 00:25:45.477 [2024-07-26 03:54:00.335555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.477 [2024-07-26 03:54:00.336037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.477 [2024-07-26 03:54:00.336068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:45.477 [2024-07-26 03:54:00.336083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:25:45.477 [2024-07-26 03:54:00.336094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.477 [2024-07-26 03:54:00.373433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.477 [2024-07-26 03:54:00.373500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.477 [2024-07-26 03:54:00.373526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.477 [2024-07-26 03:54:00.373548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.477 [2024-07-26 03:54:00.373664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.477 [2024-07-26 03:54:00.373690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.477 [2024-07-26 03:54:00.373711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.477 [2024-07-26 03:54:00.373731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:45.477 [2024-07-26 03:54:00.373942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.477 [2024-07-26 03:54:00.373974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.477 [2024-07-26 03:54:00.373998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.477 [2024-07-26 03:54:00.374017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.477 [2024-07-26 03:54:00.374055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.477 [2024-07-26 03:54:00.374078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.477 [2024-07-26 03:54:00.374100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.477 [2024-07-26 03:54:00.374119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.735 [2024-07-26 03:54:00.475074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.735 [2024-07-26 03:54:00.475143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.735 [2024-07-26 03:54:00.475162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.735 [2024-07-26 03:54:00.475174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.735 [2024-07-26 03:54:00.561239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.735 [2024-07-26 03:54:00.561310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.735 [2024-07-26 03:54:00.561336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.736 [2024-07-26 03:54:00.561348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.736 [2024-07-26 03:54:00.561461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.736 [2024-07-26 03:54:00.561484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.736 [2024-07-26 03:54:00.561497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.736 [2024-07-26 03:54:00.561507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.736 [2024-07-26 03:54:00.561561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.736 [2024-07-26 03:54:00.561578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.736 [2024-07-26 03:54:00.561590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.736 [2024-07-26 03:54:00.561600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.736 [2024-07-26 03:54:00.561720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.736 [2024-07-26 03:54:00.561742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.736 [2024-07-26 03:54:00.561765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.736 [2024-07-26 03:54:00.561776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.736 [2024-07-26 03:54:00.561856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.736 [2024-07-26 03:54:00.561876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.736 [2024-07-26 03:54:00.561888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.736 [2024-07-26 
03:54:00.561899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.736 [2024-07-26 03:54:00.561945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.736 [2024-07-26 03:54:00.561965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.736 [2024-07-26 03:54:00.561985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.736 [2024-07-26 03:54:00.561995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.736 [2024-07-26 03:54:00.562046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.736 [2024-07-26 03:54:00.562063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.736 [2024-07-26 03:54:00.562074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.736 [2024-07-26 03:54:00.562085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.736 [2024-07-26 03:54:00.562228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.096 ms, result 0 00:25:47.637 00:25:47.637 00:25:47.637 03:54:02 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:47.637 [2024-07-26 03:54:02.174976] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:25:47.637 [2024-07-26 03:54:02.175149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82515 ] 00:25:47.637 [2024-07-26 03:54:02.348436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.895 [2024-07-26 03:54:02.585805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.153 [2024-07-26 03:54:02.929854] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:48.153 [2024-07-26 03:54:02.929933] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:48.413 [2024-07-26 03:54:03.092390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.413 [2024-07-26 03:54:03.092464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:48.413 [2024-07-26 03:54:03.092487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:48.413 [2024-07-26 03:54:03.092499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.413 [2024-07-26 03:54:03.092570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.413 [2024-07-26 03:54:03.092590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:48.413 [2024-07-26 03:54:03.092603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:48.413 [2024-07-26 03:54:03.092618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.413 [2024-07-26 03:54:03.092654] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:48.413 [2024-07-26 03:54:03.093639] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:48.414 [2024-07-26 
03:54:03.093677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.093691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:48.414 [2024-07-26 03:54:03.093704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:25:48.414 [2024-07-26 03:54:03.093715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.094900] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:48.414 [2024-07-26 03:54:03.111575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.111641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:48.414 [2024-07-26 03:54:03.111662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.674 ms 00:25:48.414 [2024-07-26 03:54:03.111674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.111768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.111792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:48.414 [2024-07-26 03:54:03.111805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:48.414 [2024-07-26 03:54:03.111838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.116637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.116699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:48.414 [2024-07-26 03:54:03.116717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:25:48.414 [2024-07-26 03:54:03.116729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.116862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.116884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:48.414 [2024-07-26 03:54:03.116899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:25:48.414 [2024-07-26 03:54:03.116917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.117033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.117063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:48.414 [2024-07-26 03:54:03.117084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:48.414 [2024-07-26 03:54:03.117101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.117157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:48.414 [2024-07-26 03:54:03.121558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.121600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:48.414 [2024-07-26 03:54:03.121617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:25:48.414 [2024-07-26 03:54:03.121628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.121685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.121703] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:48.414 [2024-07-26 03:54:03.121716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:48.414 [2024-07-26 03:54:03.121727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.121777] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:48.414 [2024-07-26 03:54:03.121810] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:48.414 [2024-07-26 03:54:03.121886] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:48.414 [2024-07-26 03:54:03.121914] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:48.414 [2024-07-26 03:54:03.122031] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:48.414 [2024-07-26 03:54:03.122048] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:48.414 [2024-07-26 03:54:03.122062] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:48.414 [2024-07-26 03:54:03.122078] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122091] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122104] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:48.414 [2024-07-26 03:54:03.122115] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:48.414 [2024-07-26 03:54:03.122126] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:48.414 [2024-07-26 03:54:03.122136] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:48.414 [2024-07-26 03:54:03.122148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.122164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:48.414 [2024-07-26 03:54:03.122176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:25:48.414 [2024-07-26 03:54:03.122187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.122310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.414 [2024-07-26 03:54:03.122335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:48.414 [2024-07-26 03:54:03.122349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:48.414 [2024-07-26 03:54:03.122361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.414 [2024-07-26 03:54:03.122468] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:48.414 [2024-07-26 03:54:03.122485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:48.414 [2024-07-26 03:54:03.122503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:25:48.414 [2024-07-26 03:54:03.122537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:48.414 [2024-07-26 03:54:03.122569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.414 [2024-07-26 03:54:03.122590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:48.414 [2024-07-26 03:54:03.122600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:48.414 [2024-07-26 03:54:03.122610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.414 [2024-07-26 03:54:03.122620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:48.414 [2024-07-26 03:54:03.122631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:48.414 [2024-07-26 03:54:03.122641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:48.414 [2024-07-26 03:54:03.122679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:48.414 [2024-07-26 03:54:03.122726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:48.414 [2024-07-26 03:54:03.122759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:48.414 [2024-07-26 03:54:03.122790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:48.414 [2024-07-26 03:54:03.122838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.414 [2024-07-26 03:54:03.122860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:48.414 [2024-07-26 03:54:03.122871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.414 [2024-07-26 03:54:03.122894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:48.414 [2024-07-26 03:54:03.122909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:48.414 [2024-07-26 03:54:03.122919] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.414 [2024-07-26 03:54:03.122929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:48.414 [2024-07-26 03:54:03.122940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:48.414 [2024-07-26 03:54:03.122950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:48.414 [2024-07-26 03:54:03.122971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:48.414 [2024-07-26 03:54:03.122980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.414 [2024-07-26 03:54:03.122990] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:48.414 [2024-07-26 03:54:03.123001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:48.414 [2024-07-26 03:54:03.123012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.414 [2024-07-26 03:54:03.123023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.414 [2024-07-26 03:54:03.123034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:48.414 [2024-07-26 03:54:03.123045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:48.415 [2024-07-26 03:54:03.123055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:48.415 [2024-07-26 03:54:03.123065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:48.415 [2024-07-26 03:54:03.123075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:48.415 [2024-07-26 03:54:03.123088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:48.415 [2024-07-26 03:54:03.123101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:48.415 [2024-07-26 03:54:03.123115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.415 [2024-07-26 03:54:03.123128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:48.415 [2024-07-26 03:54:03.123140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:48.415 [2024-07-26 03:54:03.123152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:48.415 [2024-07-26 03:54:03.123163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:48.415 [2024-07-26 03:54:03.123174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:48.415 [2024-07-26 03:54:03.123186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:48.415 [2024-07-26 03:54:03.123197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:48.415 [2024-07-26 03:54:03.123209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:25:48.415 [2024-07-26 03:54:03.123220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:48.415 [2024-07-26 03:54:03.123231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:48.415 [2024-07-26 03:54:03.123243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:48.415 [2024-07-26 03:54:03.123254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:48.415 [2024-07-26 03:54:03.123265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:48.415 [2024-07-26 03:54:03.123277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:48.415 [2024-07-26 03:54:03.123288] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:48.415 [2024-07-26 03:54:03.123300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.415 [2024-07-26 03:54:03.123318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:48.415 [2024-07-26 03:54:03.123330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:48.415 [2024-07-26 03:54:03.123341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:48.415 [2024-07-26 03:54:03.123352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:48.415 [2024-07-26 03:54:03.123365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.123377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:48.415 [2024-07-26 03:54:03.123388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:25:48.415 [2024-07-26 03:54:03.123399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.172925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.173031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:48.415 [2024-07-26 03:54:03.173072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.457 ms 00:25:48.415 [2024-07-26 03:54:03.173097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.173257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.173282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:48.415 [2024-07-26 03:54:03.173298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:48.415 [2024-07-26 03:54:03.173311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.223262] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.223336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:48.415 [2024-07-26 03:54:03.223361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.825 ms 00:25:48.415 [2024-07-26 03:54:03.223375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.223459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.223479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:48.415 [2024-07-26 03:54:03.223495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:48.415 [2024-07-26 03:54:03.223515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.223990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.224014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:48.415 [2024-07-26 03:54:03.224031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:25:48.415 [2024-07-26 03:54:03.224044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.224241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.224263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:48.415 [2024-07-26 03:54:03.224277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:25:48.415 [2024-07-26 03:54:03.224291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.245592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.245677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:48.415 [2024-07-26 03:54:03.245702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.259 ms 00:25:48.415 [2024-07-26 03:54:03.245723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.265875] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:48.415 [2024-07-26 03:54:03.265947] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:48.415 [2024-07-26 03:54:03.265973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.265988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:48.415 [2024-07-26 03:54:03.266005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.013 ms 00:25:48.415 [2024-07-26 03:54:03.266017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.415 [2024-07-26 03:54:03.303844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.415 [2024-07-26 03:54:03.303948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:48.415 [2024-07-26 03:54:03.303975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.746 ms 00:25:48.415 [2024-07-26 03:54:03.303990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.324640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.324742] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:48.681 [2024-07-26 03:54:03.324768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.532 ms 00:25:48.681 [2024-07-26 03:54:03.324783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.347045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.347127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:48.681 [2024-07-26 03:54:03.347153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.137 ms 00:25:48.681 [2024-07-26 03:54:03.347166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.348607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.348680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:48.681 [2024-07-26 03:54:03.348714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:25:48.681 [2024-07-26 03:54:03.348742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.444865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.444976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:48.681 [2024-07-26 03:54:03.445013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.057 ms 00:25:48.681 [2024-07-26 03:54:03.445062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.465163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:48.681 [2024-07-26 03:54:03.468887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.468947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:48.681 [2024-07-26 03:54:03.468980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.712 ms 00:25:48.681 [2024-07-26 03:54:03.469003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.469220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.469252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:48.681 [2024-07-26 03:54:03.469278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:48.681 [2024-07-26 03:54:03.469299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.469457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.469490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:48.681 [2024-07-26 03:54:03.469514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:48.681 [2024-07-26 03:54:03.469536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.469595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.469621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:48.681 [2024-07-26 03:54:03.469643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:48.681 [2024-07-26 03:54:03.469664] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.469726] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:48.681 [2024-07-26 03:54:03.469754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.469782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:48.681 [2024-07-26 03:54:03.469805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:48.681 [2024-07-26 03:54:03.469861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.518019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.518097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:48.681 [2024-07-26 03:54:03.518122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.104 ms 00:25:48.681 [2024-07-26 03:54:03.518147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.518262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.681 [2024-07-26 03:54:03.518285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:48.681 [2024-07-26 03:54:03.518300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:48.681 [2024-07-26 03:54:03.518314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.681 [2024-07-26 03:54:03.519725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.656 ms, result 0 00:26:28.829  Copying: 25/1024 [MB] (25 MBps) Copying: 50/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (24 MBps) Copying: 99/1024 [MB] (23 MBps) Copying: 128/1024 [MB] (29 MBps) Copying: 154/1024 [MB] (25 MBps) Copying: 178/1024 [MB] (24 MBps) Copying: 204/1024 [MB] (26 MBps) Copying: 229/1024 [MB] (24 MBps) Copying: 254/1024 [MB] (24 MBps) Copying: 278/1024 [MB] (24 MBps) Copying: 305/1024 [MB] (26 MBps) Copying: 329/1024 [MB] (24 MBps) Copying: 354/1024 [MB] (25 MBps) Copying: 381/1024 [MB] (26 MBps) Copying: 407/1024 [MB] (26 MBps) Copying: 436/1024 [MB] (28 MBps) Copying: 464/1024 [MB] (28 MBps) Copying: 491/1024 [MB] (27 MBps) Copying: 518/1024 [MB] (26 MBps) Copying: 543/1024 [MB] (24 MBps) Copying: 569/1024 [MB] (25 MBps) Copying: 595/1024 [MB] (26 MBps) Copying: 620/1024 [MB] (25 MBps) Copying: 646/1024 [MB] (25 MBps) Copying: 673/1024 [MB] (27 MBps) Copying: 699/1024 [MB] (25 MBps) Copying: 726/1024 [MB] (27 MBps) Copying: 755/1024 [MB] (28 MBps) Copying: 781/1024 [MB] (25 MBps) Copying: 807/1024 [MB] (26 MBps) Copying: 835/1024 [MB] (27 MBps) Copying: 864/1024 [MB] (28 MBps) Copying: 890/1024 [MB] (26 MBps) Copying: 916/1024 [MB] (26 MBps) Copying: 942/1024 [MB] (25 MBps) Copying: 968/1024 [MB] (25 MBps) Copying: 993/1024 [MB] (25 MBps) Copying: 1018/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-26 03:54:43.706616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.829 [2024-07-26 03:54:43.706745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.829 [2024-07-26 03:54:43.706773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:28.829 [2024-07-26 03:54:43.706788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.829 [2024-07-26 03:54:43.706849] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.829 [2024-07-26 03:54:43.710986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.829 [2024-07-26 03:54:43.711035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.829 [2024-07-26 03:54:43.711056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.105 ms 00:26:28.829 [2024-07-26 03:54:43.711078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.829 [2024-07-26 03:54:43.711424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.829 [2024-07-26 03:54:43.711457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.829 [2024-07-26 03:54:43.711499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:26:28.829 [2024-07-26 03:54:43.711513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.829 [2024-07-26 03:54:43.715940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.829 [2024-07-26 03:54:43.715992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.829 [2024-07-26 03:54:43.716011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.400 ms 00:26:28.829 [2024-07-26 03:54:43.716025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.829 [2024-07-26 03:54:43.724294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.829 [2024-07-26 03:54:43.724335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.829 [2024-07-26 03:54:43.724354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.227 ms 00:26:28.829 [2024-07-26 03:54:43.724368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.769284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.088 [2024-07-26 03:54:43.769388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:29.088 [2024-07-26 03:54:43.769415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.803 ms 00:26:29.088 [2024-07-26 03:54:43.769429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.792725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.088 [2024-07-26 03:54:43.792796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:29.088 [2024-07-26 03:54:43.792843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.242 ms 00:26:29.088 [2024-07-26 03:54:43.792862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.793098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.088 [2024-07-26 03:54:43.793129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:29.088 [2024-07-26 03:54:43.793154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:26:29.088 [2024-07-26 03:54:43.793176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.832068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.088 [2024-07-26 03:54:43.832126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:29.088 [2024-07-26 03:54:43.832149] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.865 ms 00:26:29.088 [2024-07-26 03:54:43.832163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.869827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.088 [2024-07-26 03:54:43.869882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:29.088 [2024-07-26 03:54:43.869904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.629 ms 00:26:29.088 [2024-07-26 03:54:43.869917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.907272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.088 [2024-07-26 03:54:43.907332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:29.088 [2024-07-26 03:54:43.907372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.321 ms 00:26:29.088 [2024-07-26 03:54:43.907386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.944688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.088 [2024-07-26 03:54:43.944747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:29.088 [2024-07-26 03:54:43.944768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.209 ms 00:26:29.088 [2024-07-26 03:54:43.944782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.088 [2024-07-26 03:54:43.944856] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:29.088 [2024-07-26 03:54:43.944900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:29.088 [2024-07-26 03:54:43.944924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:29.088 [2024-07-26 03:54:43.944957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:29.088 [2024-07-26 03:54:43.944990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945245] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 
03:54:43.945804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.945991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:26:29.089 [2024-07-26 03:54:43.946176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:29.089 [2024-07-26 03:54:43.946333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:29.090 [2024-07-26 03:54:43.946725] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:29.090 [2024-07-26 03:54:43.946744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67f15440-6dde-4ab9-b185-89b32e8c0bb4 00:26:29.090 [2024-07-26 03:54:43.946768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:29.090 [2024-07-26 03:54:43.946782] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:29.090 [2024-07-26 03:54:43.946794] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:29.090 [2024-07-26 03:54:43.946808] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:29.090 [2024-07-26 03:54:43.947095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:29.090 [2024-07-26 03:54:43.947160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:29.090 [2024-07-26 03:54:43.947208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:29.090 [2024-07-26 03:54:43.947340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:29.090 [2024-07-26 03:54:43.947398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:29.090 [2024-07-26 03:54:43.947444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.090 [2024-07-26 03:54:43.947490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:29.090 [2024-07-26 03:54:43.947663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:26:29.090 [2024-07-26 03:54:43.947712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.090 [2024-07-26 03:54:43.967961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:29.090 [2024-07-26 03:54:43.968148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:29.090 [2024-07-26 03:54:43.968301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.076 ms 00:26:29.090 [2024-07-26 03:54:43.968362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.090 [2024-07-26 03:54:43.969020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.090 [2024-07-26 03:54:43.969182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:29.090 [2024-07-26 03:54:43.969316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:26:29.090 [2024-07-26 03:54:43.969342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.349 [2024-07-26 03:54:44.013771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.349 [2024-07-26 03:54:44.013863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:29.349 [2024-07-26 03:54:44.013894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.349 [2024-07-26 03:54:44.013907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.349 [2024-07-26 03:54:44.014070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.349 [2024-07-26 03:54:44.014107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:29.349 [2024-07-26 03:54:44.014134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.349 [2024-07-26 03:54:44.014156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.349 [2024-07-26 03:54:44.014317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.349 [2024-07-26 03:54:44.014360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:29.349 [2024-07-26 03:54:44.014391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.349 [2024-07-26 03:54:44.014415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.349 [2024-07-26 03:54:44.014445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.349 [2024-07-26 03:54:44.014459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:29.350 [2024-07-26 03:54:44.014471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.014481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.114510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.114612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:29.350 [2024-07-26 03:54:44.114661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.114685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.199521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.199592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:29.350 [2024-07-26 03:54:44.199613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.199625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 
03:54:44.199744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.199764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:29.350 [2024-07-26 03:54:44.199777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.199788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.199867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.199888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:29.350 [2024-07-26 03:54:44.199901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.199913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.200045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.200071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:29.350 [2024-07-26 03:54:44.200084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.200095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.200143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.200161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:29.350 [2024-07-26 03:54:44.200173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.200185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.200230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.200251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:29.350 [2024-07-26 03:54:44.200264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.200275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.200349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.350 [2024-07-26 03:54:44.200383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:29.350 [2024-07-26 03:54:44.200400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.350 [2024-07-26 03:54:44.200412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.350 [2024-07-26 03:54:44.200623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.941 ms, result 0 00:26:30.727 00:26:30.727 00:26:30.727 03:54:45 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:33.262 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:33.262 03:54:47 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:33.262 [2024-07-26 03:54:47.769671] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
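Stripped of the FTL startup/shutdown traces, the ftl_restore steps logged above reduce to a short dd-style round trip through the restored bdev. A minimal sketch using the same binaries, JSON config, paths, block count and seek offset that appear in this job's output (PIDs and step durations naturally differ between runs):

    # restore.sh@74: read 262144 blocks out of the restored FTL bdev into a scratch file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
    # restore.sh@76: verify the scratch file against testfile.md5 (reported OK above)
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # restore.sh@79: write the verified data back through ftl0 at a 131072-block offset
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072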
00:26:33.262 [2024-07-26 03:54:47.769861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82958 ] 00:26:33.262 [2024-07-26 03:54:47.941634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.521 [2024-07-26 03:54:48.168776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.780 [2024-07-26 03:54:48.494164] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:33.780 [2024-07-26 03:54:48.494245] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:33.780 [2024-07-26 03:54:48.654103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.654176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:33.780 [2024-07-26 03:54:48.654198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:33.780 [2024-07-26 03:54:48.654211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.654282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.654302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:33.780 [2024-07-26 03:54:48.654315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:33.780 [2024-07-26 03:54:48.654330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.654366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:33.780 [2024-07-26 03:54:48.655343] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:33.780 [2024-07-26 03:54:48.655390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.655405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:33.780 [2024-07-26 03:54:48.655418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:26:33.780 [2024-07-26 03:54:48.655430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.656557] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:33.780 [2024-07-26 03:54:48.672832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.672893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:33.780 [2024-07-26 03:54:48.672913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.274 ms 00:26:33.780 [2024-07-26 03:54:48.672927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.673011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.673034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:33.780 [2024-07-26 03:54:48.673048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:33.780 [2024-07-26 03:54:48.673059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.677469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:33.780 [2024-07-26 03:54:48.677523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:33.780 [2024-07-26 03:54:48.677540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.302 ms 00:26:33.780 [2024-07-26 03:54:48.677553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.677694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.677719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:33.780 [2024-07-26 03:54:48.677733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:33.780 [2024-07-26 03:54:48.677744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.677848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.677868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:33.780 [2024-07-26 03:54:48.677883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:33.780 [2024-07-26 03:54:48.677903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.677941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:33.780 [2024-07-26 03:54:48.682236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.682276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:33.780 [2024-07-26 03:54:48.682293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.305 ms 00:26:33.780 [2024-07-26 03:54:48.682304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.682357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.780 [2024-07-26 03:54:48.682374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:33.780 [2024-07-26 03:54:48.682388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:33.780 [2024-07-26 03:54:48.682400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.780 [2024-07-26 03:54:48.682447] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:33.780 [2024-07-26 03:54:48.682478] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:33.780 [2024-07-26 03:54:48.682523] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:33.780 [2024-07-26 03:54:48.682548] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:33.781 [2024-07-26 03:54:48.682662] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:33.781 [2024-07-26 03:54:48.682677] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:33.781 [2024-07-26 03:54:48.682692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:33.781 [2024-07-26 03:54:48.682718] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:33.781 [2024-07-26 03:54:48.682734] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:33.781 [2024-07-26 03:54:48.682747] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:33.781 [2024-07-26 03:54:48.682758] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:33.781 [2024-07-26 03:54:48.682769] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:33.781 [2024-07-26 03:54:48.682780] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:33.781 [2024-07-26 03:54:48.682793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.781 [2024-07-26 03:54:48.682809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:33.781 [2024-07-26 03:54:48.682857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:26:33.781 [2024-07-26 03:54:48.682871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.781 [2024-07-26 03:54:48.682968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.781 [2024-07-26 03:54:48.682985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:33.781 [2024-07-26 03:54:48.682997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:33.781 [2024-07-26 03:54:48.683009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.781 [2024-07-26 03:54:48.683143] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:33.781 [2024-07-26 03:54:48.683162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:33.781 [2024-07-26 03:54:48.683190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.781 [2024-07-26 03:54:48.683202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:33.781 [2024-07-26 03:54:48.683225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:33.781 [2024-07-26 03:54:48.683247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:33.781 [2024-07-26 03:54:48.683258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.781 [2024-07-26 03:54:48.683280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:33.781 [2024-07-26 03:54:48.683291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:33.781 [2024-07-26 03:54:48.683302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.781 [2024-07-26 03:54:48.683313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:33.781 [2024-07-26 03:54:48.683324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:33.781 [2024-07-26 03:54:48.683335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:33.781 [2024-07-26 03:54:48.683357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:33.781 [2024-07-26 03:54:48.683367] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:33.781 [2024-07-26 03:54:48.683402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.781 [2024-07-26 03:54:48.683423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:33.781 [2024-07-26 03:54:48.683434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.781 [2024-07-26 03:54:48.683455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:33.781 [2024-07-26 03:54:48.683465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.781 [2024-07-26 03:54:48.683486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:33.781 [2024-07-26 03:54:48.683496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.781 [2024-07-26 03:54:48.683516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:33.781 [2024-07-26 03:54:48.683528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:33.781 [2024-07-26 03:54:48.683538] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.781 [2024-07-26 03:54:48.683548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:33.781 [2024-07-26 03:54:48.683559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:33.781 [2024-07-26 03:54:48.683569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:34.067 [2024-07-26 03:54:48.683579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:34.067 [2024-07-26 03:54:48.683590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:34.067 [2024-07-26 03:54:48.683601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.067 [2024-07-26 03:54:48.683611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:34.067 [2024-07-26 03:54:48.683622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:34.067 [2024-07-26 03:54:48.683632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.067 [2024-07-26 03:54:48.683642] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:34.067 [2024-07-26 03:54:48.683654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:34.067 [2024-07-26 03:54:48.683665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:34.067 [2024-07-26 03:54:48.683676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.067 [2024-07-26 03:54:48.683688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:34.067 [2024-07-26 03:54:48.683700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:34.067 [2024-07-26 03:54:48.683721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:34.067 
[2024-07-26 03:54:48.683732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:34.067 [2024-07-26 03:54:48.683743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:34.067 [2024-07-26 03:54:48.683753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:34.067 [2024-07-26 03:54:48.683765] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:34.067 [2024-07-26 03:54:48.683780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.067 [2024-07-26 03:54:48.683794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:34.067 [2024-07-26 03:54:48.683806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:34.067 [2024-07-26 03:54:48.683834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:34.067 [2024-07-26 03:54:48.683849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:34.067 [2024-07-26 03:54:48.683861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:34.067 [2024-07-26 03:54:48.683873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:34.067 [2024-07-26 03:54:48.683885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:34.067 [2024-07-26 03:54:48.683897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:34.067 [2024-07-26 03:54:48.683908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:34.067 [2024-07-26 03:54:48.683920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:34.067 [2024-07-26 03:54:48.683931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:34.067 [2024-07-26 03:54:48.683943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:34.067 [2024-07-26 03:54:48.683955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:34.067 [2024-07-26 03:54:48.683967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:34.067 [2024-07-26 03:54:48.683978] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:34.067 [2024-07-26 03:54:48.683991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.067 [2024-07-26 03:54:48.684010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:34.067 [2024-07-26 03:54:48.684023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:34.067 [2024-07-26 03:54:48.684035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:34.067 [2024-07-26 03:54:48.684047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:34.067 [2024-07-26 03:54:48.684059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.067 [2024-07-26 03:54:48.684072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:34.067 [2024-07-26 03:54:48.684084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:26:34.067 [2024-07-26 03:54:48.684096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.067 [2024-07-26 03:54:48.727444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.067 [2024-07-26 03:54:48.727502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:34.067 [2024-07-26 03:54:48.727522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.283 ms 00:26:34.067 [2024-07-26 03:54:48.727535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.067 [2024-07-26 03:54:48.727658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.067 [2024-07-26 03:54:48.727675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:34.068 [2024-07-26 03:54:48.727688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:34.068 [2024-07-26 03:54:48.727700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.766367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.766431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:34.068 [2024-07-26 03:54:48.766452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.570 ms 00:26:34.068 [2024-07-26 03:54:48.766465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.766540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.766557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:34.068 [2024-07-26 03:54:48.766571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:34.068 [2024-07-26 03:54:48.766588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.767026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.767047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:34.068 [2024-07-26 03:54:48.767061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:26:34.068 [2024-07-26 03:54:48.767073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.767249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.767277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:34.068 [2024-07-26 03:54:48.767290] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:26:34.068 [2024-07-26 03:54:48.767302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.783395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.783447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:34.068 [2024-07-26 03:54:48.783466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.057 ms 00:26:34.068 [2024-07-26 03:54:48.783485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.799854] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:34.068 [2024-07-26 03:54:48.799902] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:34.068 [2024-07-26 03:54:48.799923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.799935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:34.068 [2024-07-26 03:54:48.799949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.286 ms 00:26:34.068 [2024-07-26 03:54:48.799961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.829949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.830024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:34.068 [2024-07-26 03:54:48.830045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.936 ms 00:26:34.068 [2024-07-26 03:54:48.830058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.845965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.846016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:34.068 [2024-07-26 03:54:48.846034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.825 ms 00:26:34.068 [2024-07-26 03:54:48.846047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.861633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.861691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:34.068 [2024-07-26 03:54:48.861709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.527 ms 00:26:34.068 [2024-07-26 03:54:48.861722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.862595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.862635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:34.068 [2024-07-26 03:54:48.862652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:26:34.068 [2024-07-26 03:54:48.862664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.936244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.936319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:34.068 [2024-07-26 03:54:48.936342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.540 ms 00:26:34.068 [2024-07-26 03:54:48.936363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.949304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:34.068 [2024-07-26 03:54:48.951997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.952037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:34.068 [2024-07-26 03:54:48.952056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.555 ms 00:26:34.068 [2024-07-26 03:54:48.952068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.952192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.952213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:34.068 [2024-07-26 03:54:48.952227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:34.068 [2024-07-26 03:54:48.952253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.952369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.952396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:34.068 [2024-07-26 03:54:48.952419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:34.068 [2024-07-26 03:54:48.952432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.952468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.952485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:34.068 [2024-07-26 03:54:48.952497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:34.068 [2024-07-26 03:54:48.952509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.068 [2024-07-26 03:54:48.952551] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:34.068 [2024-07-26 03:54:48.952567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.068 [2024-07-26 03:54:48.952584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:34.068 [2024-07-26 03:54:48.952596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:34.068 [2024-07-26 03:54:48.952608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.327 [2024-07-26 03:54:48.983894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.327 [2024-07-26 03:54:48.983955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:34.327 [2024-07-26 03:54:48.983977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.259 ms 00:26:34.327 [2024-07-26 03:54:48.983997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.327 [2024-07-26 03:54:48.984109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.327 [2024-07-26 03:54:48.984130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:34.327 [2024-07-26 03:54:48.984143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:34.327 [2024-07-26 03:54:48.984155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:34.327 [2024-07-26 03:54:48.985433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.832 ms, result 0 00:27:10.934  Copying: 30/1024 [MB] (30 MBps) Copying: 60/1024 [MB] (30 MBps) Copying: 90/1024 [MB] (29 MBps) Copying: 119/1024 [MB] (28 MBps) Copying: 148/1024 [MB] (29 MBps) Copying: 178/1024 [MB] (29 MBps) Copying: 207/1024 [MB] (29 MBps) Copying: 231/1024 [MB] (24 MBps) Copying: 258/1024 [MB] (26 MBps) Copying: 287/1024 [MB] (29 MBps) Copying: 318/1024 [MB] (31 MBps) Copying: 348/1024 [MB] (30 MBps) Copying: 380/1024 [MB] (31 MBps) Copying: 407/1024 [MB] (27 MBps) Copying: 436/1024 [MB] (28 MBps) Copying: 464/1024 [MB] (28 MBps) Copying: 490/1024 [MB] (25 MBps) Copying: 517/1024 [MB] (26 MBps) Copying: 545/1024 [MB] (28 MBps) Copying: 573/1024 [MB] (27 MBps) Copying: 603/1024 [MB] (30 MBps) Copying: 634/1024 [MB] (30 MBps) Copying: 666/1024 [MB] (31 MBps) Copying: 695/1024 [MB] (29 MBps) Copying: 724/1024 [MB] (28 MBps) Copying: 754/1024 [MB] (29 MBps) Copying: 786/1024 [MB] (31 MBps) Copying: 816/1024 [MB] (30 MBps) Copying: 845/1024 [MB] (29 MBps) Copying: 874/1024 [MB] (28 MBps) Copying: 900/1024 [MB] (26 MBps) Copying: 928/1024 [MB] (27 MBps) Copying: 955/1024 [MB] (27 MBps) Copying: 985/1024 [MB] (30 MBps) Copying: 1014/1024 [MB] (28 MBps) Copying: 1048172/1048576 [kB] (9300 kBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-26 03:55:25.546768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.546882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:10.934 [2024-07-26 03:55:25.546908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:10.934 [2024-07-26 03:55:25.546922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.550799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:10.934 [2024-07-26 03:55:25.557598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.557689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:10.934 [2024-07-26 03:55:25.557722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.688 ms 00:27:10.934 [2024-07-26 03:55:25.557742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.570225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.570324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:10.934 [2024-07-26 03:55:25.570357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.049 ms 00:27:10.934 [2024-07-26 03:55:25.570378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.591307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.591422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:10.934 [2024-07-26 03:55:25.591458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.867 ms 00:27:10.934 [2024-07-26 03:55:25.591480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.598355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.598456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:27:10.934 [2024-07-26 03:55:25.598485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:27:10.934 [2024-07-26 03:55:25.598504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.631025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.631113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:10.934 [2024-07-26 03:55:25.631144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.413 ms 00:27:10.934 [2024-07-26 03:55:25.631164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.649754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.649876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:10.934 [2024-07-26 03:55:25.649910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.486 ms 00:27:10.934 [2024-07-26 03:55:25.649931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.733783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.733959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:10.934 [2024-07-26 03:55:25.734001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.732 ms 00:27:10.934 [2024-07-26 03:55:25.734025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.792721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.792939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:10.934 [2024-07-26 03:55:25.792981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.641 ms 00:27:10.934 [2024-07-26 03:55:25.793008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.934 [2024-07-26 03:55:25.835572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.934 [2024-07-26 03:55:25.835668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:10.934 [2024-07-26 03:55:25.835695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.439 ms 00:27:10.935 [2024-07-26 03:55:25.835709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.194 [2024-07-26 03:55:25.874425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.194 [2024-07-26 03:55:25.874515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:11.194 [2024-07-26 03:55:25.874565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.618 ms 00:27:11.194 [2024-07-26 03:55:25.874580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.194 [2024-07-26 03:55:25.913516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.194 [2024-07-26 03:55:25.913615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:11.194 [2024-07-26 03:55:25.913641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.767 ms 00:27:11.194 [2024-07-26 03:55:25.913656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.194 [2024-07-26 03:55:25.913743] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:27:11.194 [2024-07-26 03:55:25.913772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125440 / 261120 wr_cnt: 1 state: open 00:27:11.194 [2024-07-26 03:55:25.913792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.913807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.913854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.913872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.913887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.913913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.913949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.913978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.914991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.915011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.915030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:11.194 [2024-07-26 03:55:25.915055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915688] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.915988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916157] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:11.195 [2024-07-26 03:55:25.916184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:11.195 [2024-07-26 03:55:25.916199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67f15440-6dde-4ab9-b185-89b32e8c0bb4 00:27:11.195 [2024-07-26 03:55:25.916214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125440 00:27:11.195 [2024-07-26 03:55:25.916228] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126400 00:27:11.195 [2024-07-26 03:55:25.916242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125440 00:27:11.195 [2024-07-26 03:55:25.916270] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0077 00:27:11.195 [2024-07-26 03:55:25.916284] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:11.195 [2024-07-26 03:55:25.916298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:11.195 [2024-07-26 03:55:25.916316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:11.195 [2024-07-26 03:55:25.916329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:11.195 [2024-07-26 03:55:25.916342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:11.195 [2024-07-26 03:55:25.916358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.195 [2024-07-26 03:55:25.916372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:11.195 [2024-07-26 03:55:25.916388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:27:11.195 [2024-07-26 03:55:25.916402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.195 [2024-07-26 03:55:25.937402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.195 [2024-07-26 03:55:25.937492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:11.195 [2024-07-26 03:55:25.937541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.913 ms 00:27:11.195 [2024-07-26 03:55:25.937556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.195 [2024-07-26 03:55:25.938248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.195 [2024-07-26 03:55:25.938296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:11.195 [2024-07-26 03:55:25.938317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:27:11.195 [2024-07-26 03:55:25.938332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.195 [2024-07-26 03:55:25.981519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.195 [2024-07-26 03:55:25.981606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:11.195 [2024-07-26 03:55:25.981634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.195 [2024-07-26 03:55:25.981647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.195 [2024-07-26 03:55:25.981772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.195 [2024-07-26 03:55:25.981798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:11.195 [2024-07-26 03:55:25.981812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.195 [2024-07-26 
03:55:25.981859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.195 [2024-07-26 03:55:25.982003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.195 [2024-07-26 03:55:25.982032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:11.195 [2024-07-26 03:55:25.982055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.195 [2024-07-26 03:55:25.982086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.195 [2024-07-26 03:55:25.982126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.195 [2024-07-26 03:55:25.982150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:11.195 [2024-07-26 03:55:25.982173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.195 [2024-07-26 03:55:25.982194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.195 [2024-07-26 03:55:26.084666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.195 [2024-07-26 03:55:26.084749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:11.195 [2024-07-26 03:55:26.084771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.195 [2024-07-26 03:55:26.084797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-26 03:55:26.172448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.454 [2024-07-26 03:55:26.172541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:11.454 [2024-07-26 03:55:26.172565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.454 [2024-07-26 03:55:26.172578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-26 03:55:26.172701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.454 [2024-07-26 03:55:26.172721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:11.454 [2024-07-26 03:55:26.172735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.454 [2024-07-26 03:55:26.172746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-26 03:55:26.172809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.454 [2024-07-26 03:55:26.172879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:11.454 [2024-07-26 03:55:26.172906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.454 [2024-07-26 03:55:26.172926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-26 03:55:26.173058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.454 [2024-07-26 03:55:26.173077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:11.454 [2024-07-26 03:55:26.173090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.454 [2024-07-26 03:55:26.173102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-26 03:55:26.173149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.454 [2024-07-26 03:55:26.173174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:11.454 [2024-07-26 03:55:26.173186] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.454 [2024-07-26 03:55:26.173198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-26 03:55:26.173247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.454 [2024-07-26 03:55:26.173270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:11.454 [2024-07-26 03:55:26.173282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.454 [2024-07-26 03:55:26.173295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.454 [2024-07-26 03:55:26.173354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.454 [2024-07-26 03:55:26.173372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:11.454 [2024-07-26 03:55:26.173385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.454 [2024-07-26 03:55:26.173396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.455 [2024-07-26 03:55:26.173536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 629.086 ms, result 0 00:27:13.357 00:27:13.357 00:27:13.357 03:55:27 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:13.357 [2024-07-26 03:55:27.973322] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:27:13.357 [2024-07-26 03:55:27.973553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83352 ] 00:27:13.357 [2024-07-26 03:55:28.158153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.615 [2024-07-26 03:55:28.348927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.873 [2024-07-26 03:55:28.664868] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:13.873 [2024-07-26 03:55:28.664968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.133 [2024-07-26 03:55:28.827763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.827881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:14.133 [2024-07-26 03:55:28.827907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:14.133 [2024-07-26 03:55:28.827919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.828031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.828062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.133 [2024-07-26 03:55:28.828086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:14.133 [2024-07-26 03:55:28.828114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.828174] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:14.133 [2024-07-26 03:55:28.829417] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:14.133 [2024-07-26 03:55:28.829481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.829505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.133 [2024-07-26 03:55:28.829524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:27:14.133 [2024-07-26 03:55:28.829542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.830926] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:14.133 [2024-07-26 03:55:28.850216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.850310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:14.133 [2024-07-26 03:55:28.850333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.285 ms 00:27:14.133 [2024-07-26 03:55:28.850346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.850489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.850514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:14.133 [2024-07-26 03:55:28.850531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:14.133 [2024-07-26 03:55:28.850543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.855715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.855845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:14.133 [2024-07-26 03:55:28.855878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.024 ms 00:27:14.133 [2024-07-26 03:55:28.855897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.856067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.856095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:14.133 [2024-07-26 03:55:28.856115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:14.133 [2024-07-26 03:55:28.856131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.856240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.856272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:14.133 [2024-07-26 03:55:28.856291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:14.133 [2024-07-26 03:55:28.856309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.856364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:14.133 [2024-07-26 03:55:28.860854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.860917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.133 [2024-07-26 03:55:28.860935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:27:14.133 [2024-07-26 03:55:28.860950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 
03:55:28.861025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.861043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:14.133 [2024-07-26 03:55:28.861057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:14.133 [2024-07-26 03:55:28.861072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.861161] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:14.133 [2024-07-26 03:55:28.861196] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:14.133 [2024-07-26 03:55:28.861247] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:14.133 [2024-07-26 03:55:28.861272] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:14.133 [2024-07-26 03:55:28.861383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:14.133 [2024-07-26 03:55:28.861402] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:14.133 [2024-07-26 03:55:28.861420] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:14.133 [2024-07-26 03:55:28.861435] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:14.133 [2024-07-26 03:55:28.861449] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:14.133 [2024-07-26 03:55:28.861461] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:14.133 [2024-07-26 03:55:28.861472] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:14.133 [2024-07-26 03:55:28.861483] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:14.133 [2024-07-26 03:55:28.861494] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:14.133 [2024-07-26 03:55:28.861520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.861537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:14.133 [2024-07-26 03:55:28.861549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:27:14.133 [2024-07-26 03:55:28.861560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.861661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.133 [2024-07-26 03:55:28.861678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:14.133 [2024-07-26 03:55:28.861690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:14.133 [2024-07-26 03:55:28.861703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.133 [2024-07-26 03:55:28.861844] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:14.133 [2024-07-26 03:55:28.861865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:14.133 [2024-07-26 03:55:28.861884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.133 [2024-07-26 03:55:28.861900] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:27:14.133 [2024-07-26 03:55:28.861911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:14.133 [2024-07-26 03:55:28.861922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:14.133 [2024-07-26 03:55:28.861932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:14.133 [2024-07-26 03:55:28.861946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:14.133 [2024-07-26 03:55:28.861957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:14.133 [2024-07-26 03:55:28.861967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.133 [2024-07-26 03:55:28.861977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:14.133 [2024-07-26 03:55:28.861987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:14.133 [2024-07-26 03:55:28.862000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.133 [2024-07-26 03:55:28.862011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:14.133 [2024-07-26 03:55:28.862021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:14.133 [2024-07-26 03:55:28.862031] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.133 [2024-07-26 03:55:28.862042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:14.134 [2024-07-26 03:55:28.862052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:14.134 [2024-07-26 03:55:28.862062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:14.134 [2024-07-26 03:55:28.862100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.134 [2024-07-26 03:55:28.862121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:14.134 [2024-07-26 03:55:28.862131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.134 [2024-07-26 03:55:28.862152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:14.134 [2024-07-26 03:55:28.862162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.134 [2024-07-26 03:55:28.862193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:14.134 [2024-07-26 03:55:28.862211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.134 [2024-07-26 03:55:28.862242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:14.134 [2024-07-26 03:55:28.862258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.134 [2024-07-26 03:55:28.862292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:14.134 [2024-07-26 03:55:28.862308] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:14.134 [2024-07-26 03:55:28.862322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.134 [2024-07-26 03:55:28.862338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:14.134 [2024-07-26 03:55:28.862353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:14.134 [2024-07-26 03:55:28.862368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:14.134 [2024-07-26 03:55:28.862400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:14.134 [2024-07-26 03:55:28.862415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862429] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:14.134 [2024-07-26 03:55:28.862446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:14.134 [2024-07-26 03:55:28.862463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.134 [2024-07-26 03:55:28.862480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.134 [2024-07-26 03:55:28.862497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:14.134 [2024-07-26 03:55:28.862513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:14.134 [2024-07-26 03:55:28.862528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:14.134 [2024-07-26 03:55:28.862545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:14.134 [2024-07-26 03:55:28.862560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:14.134 [2024-07-26 03:55:28.862576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:14.134 [2024-07-26 03:55:28.862594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:14.134 [2024-07-26 03:55:28.862614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.134 [2024-07-26 03:55:28.862632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:14.134 [2024-07-26 03:55:28.862649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:14.134 [2024-07-26 03:55:28.862666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:14.134 [2024-07-26 03:55:28.862682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:14.134 [2024-07-26 03:55:28.862700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:14.134 [2024-07-26 03:55:28.862717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:14.134 [2024-07-26 03:55:28.862734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:14.134 [2024-07-26 
03:55:28.862766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:14.134 [2024-07-26 03:55:28.862784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:14.134 [2024-07-26 03:55:28.862801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:14.134 [2024-07-26 03:55:28.862836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:14.134 [2024-07-26 03:55:28.862856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:14.134 [2024-07-26 03:55:28.862874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:14.134 [2024-07-26 03:55:28.862891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:14.134 [2024-07-26 03:55:28.862908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:14.134 [2024-07-26 03:55:28.862927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.134 [2024-07-26 03:55:28.862952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.134 [2024-07-26 03:55:28.862970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:14.134 [2024-07-26 03:55:28.862987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:14.134 [2024-07-26 03:55:28.863004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:14.134 [2024-07-26 03:55:28.863022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.863039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:14.134 [2024-07-26 03:55:28.863057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:27:14.134 [2024-07-26 03:55:28.863073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.904381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.904460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:14.134 [2024-07-26 03:55:28.904482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.216 ms 00:27:14.134 [2024-07-26 03:55:28.904494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.904620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.904636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:14.134 [2024-07-26 03:55:28.904648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:14.134 [2024-07-26 03:55:28.904659] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.943705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.943786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.134 [2024-07-26 03:55:28.943808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.942 ms 00:27:14.134 [2024-07-26 03:55:28.943845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.943950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.943967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.134 [2024-07-26 03:55:28.943980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:14.134 [2024-07-26 03:55:28.944000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.944442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.944469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.134 [2024-07-26 03:55:28.944484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:27:14.134 [2024-07-26 03:55:28.944496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.944658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.944678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.134 [2024-07-26 03:55:28.944691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:27:14.134 [2024-07-26 03:55:28.944702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.962106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.962178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.134 [2024-07-26 03:55:28.962200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.371 ms 00:27:14.134 [2024-07-26 03:55:28.962217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:28.979045] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:14.134 [2024-07-26 03:55:28.979125] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:14.134 [2024-07-26 03:55:28.979148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:28.979161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:14.134 [2024-07-26 03:55:28.979178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.724 ms 00:27:14.134 [2024-07-26 03:55:28.979189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.134 [2024-07-26 03:55:29.010124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.134 [2024-07-26 03:55:29.010252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:14.135 [2024-07-26 03:55:29.010276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.846 ms 00:27:14.135 [2024-07-26 03:55:29.010289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.135 [2024-07-26 
03:55:29.026766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.135 [2024-07-26 03:55:29.026874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:14.135 [2024-07-26 03:55:29.026895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 ms 00:27:14.135 [2024-07-26 03:55:29.026906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.044066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.044160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:14.393 [2024-07-26 03:55:29.044181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.069 ms 00:27:14.393 [2024-07-26 03:55:29.044195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.045133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.045174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:14.393 [2024-07-26 03:55:29.045191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:27:14.393 [2024-07-26 03:55:29.045203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.124998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.125091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:14.393 [2024-07-26 03:55:29.125115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.757 ms 00:27:14.393 [2024-07-26 03:55:29.125142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.139865] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:14.393 [2024-07-26 03:55:29.143406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.143503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:14.393 [2024-07-26 03:55:29.143540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.171 ms 00:27:14.393 [2024-07-26 03:55:29.143563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.143767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.143805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:14.393 [2024-07-26 03:55:29.143864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:14.393 [2024-07-26 03:55:29.143889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.146018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.146093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:14.393 [2024-07-26 03:55:29.146125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:27:14.393 [2024-07-26 03:55:29.146149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.146221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.146251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:14.393 [2024-07-26 03:55:29.146275] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:14.393 [2024-07-26 03:55:29.146295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.146395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:14.393 [2024-07-26 03:55:29.146426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.146456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:14.393 [2024-07-26 03:55:29.146476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:14.393 [2024-07-26 03:55:29.146495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.193965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.194338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:14.393 [2024-07-26 03:55:29.194549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.407 ms 00:27:14.393 [2024-07-26 03:55:29.194837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.195195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.393 [2024-07-26 03:55:29.195251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:14.393 [2024-07-26 03:55:29.195281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:14.393 [2024-07-26 03:55:29.195304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.393 [2024-07-26 03:55:29.206146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.538 ms, result 0 00:27:55.889  Copying: 26/1024 [MB] (26 MBps) Copying: 49/1024 [MB] (22 MBps) Copying: 75/1024 [MB] (26 MBps) Copying: 98/1024 [MB] (22 MBps) Copying: 123/1024 [MB] (24 MBps) Copying: 148/1024 [MB] (25 MBps) Copying: 171/1024 [MB] (23 MBps) Copying: 194/1024 [MB] (22 MBps) Copying: 217/1024 [MB] (22 MBps) Copying: 239/1024 [MB] (22 MBps) Copying: 263/1024 [MB] (23 MBps) Copying: 287/1024 [MB] (23 MBps) Copying: 311/1024 [MB] (24 MBps) Copying: 338/1024 [MB] (27 MBps) Copying: 365/1024 [MB] (26 MBps) Copying: 393/1024 [MB] (28 MBps) Copying: 419/1024 [MB] (26 MBps) Copying: 445/1024 [MB] (25 MBps) Copying: 470/1024 [MB] (25 MBps) Copying: 495/1024 [MB] (25 MBps) Copying: 520/1024 [MB] (24 MBps) Copying: 548/1024 [MB] (27 MBps) Copying: 573/1024 [MB] (25 MBps) Copying: 601/1024 [MB] (28 MBps) Copying: 628/1024 [MB] (27 MBps) Copying: 655/1024 [MB] (26 MBps) Copying: 679/1024 [MB] (23 MBps) Copying: 701/1024 [MB] (22 MBps) Copying: 725/1024 [MB] (23 MBps) Copying: 748/1024 [MB] (23 MBps) Copying: 771/1024 [MB] (23 MBps) Copying: 797/1024 [MB] (25 MBps) Copying: 825/1024 [MB] (28 MBps) Copying: 853/1024 [MB] (27 MBps) Copying: 879/1024 [MB] (26 MBps) Copying: 904/1024 [MB] (25 MBps) Copying: 932/1024 [MB] (28 MBps) Copying: 958/1024 [MB] (26 MBps) Copying: 981/1024 [MB] (23 MBps) Copying: 1005/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-26 03:56:10.636150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.889 [2024-07-26 03:56:10.637209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:55.889 [2024-07-26 03:56:10.637436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 
00:27:55.889 [2024-07-26 03:56:10.637516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.889 [2024-07-26 03:56:10.637733] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:55.889 [2024-07-26 03:56:10.646601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.889 [2024-07-26 03:56:10.646987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:55.889 [2024-07-26 03:56:10.647177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.715 ms 00:27:55.889 [2024-07-26 03:56:10.647359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.889 [2024-07-26 03:56:10.648047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.890 [2024-07-26 03:56:10.648249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:55.890 [2024-07-26 03:56:10.648430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:27:55.890 [2024-07-26 03:56:10.648608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.890 [2024-07-26 03:56:10.657652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.890 [2024-07-26 03:56:10.658039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:55.890 [2024-07-26 03:56:10.658083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.801 ms 00:27:55.890 [2024-07-26 03:56:10.658105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.890 [2024-07-26 03:56:10.670194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.890 [2024-07-26 03:56:10.670316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:55.890 [2024-07-26 03:56:10.670352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.011 ms 00:27:55.890 [2024-07-26 03:56:10.670373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.890 [2024-07-26 03:56:10.717109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.890 [2024-07-26 03:56:10.717247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:55.890 [2024-07-26 03:56:10.717281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.537 ms 00:27:55.890 [2024-07-26 03:56:10.717302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.890 [2024-07-26 03:56:10.743045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.890 [2024-07-26 03:56:10.743167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:55.890 [2024-07-26 03:56:10.743224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.631 ms 00:27:55.890 [2024-07-26 03:56:10.743244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.148 [2024-07-26 03:56:10.839490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.148 [2024-07-26 03:56:10.839616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:56.148 [2024-07-26 03:56:10.839643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.124 ms 00:27:56.148 [2024-07-26 03:56:10.839656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.148 [2024-07-26 03:56:10.873074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.148 [2024-07-26 
03:56:10.873177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:56.148 [2024-07-26 03:56:10.873200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.384 ms 00:27:56.148 [2024-07-26 03:56:10.873213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.148 [2024-07-26 03:56:10.906868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.148 [2024-07-26 03:56:10.906963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:56.148 [2024-07-26 03:56:10.906986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.554 ms 00:27:56.148 [2024-07-26 03:56:10.906999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.148 [2024-07-26 03:56:10.940245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.148 [2024-07-26 03:56:10.940345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:56.148 [2024-07-26 03:56:10.940368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.157 ms 00:27:56.148 [2024-07-26 03:56:10.940408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.148 [2024-07-26 03:56:10.973212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.148 [2024-07-26 03:56:10.973315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:56.148 [2024-07-26 03:56:10.973338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.644 ms 00:27:56.148 [2024-07-26 03:56:10.973351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.148 [2024-07-26 03:56:10.973433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:56.148 [2024-07-26 03:56:10.973461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:27:56.148 [2024-07-26 03:56:10.973476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973608] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.973970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 
[2024-07-26 03:56:10.973983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:56.148 [2024-07-26 03:56:10.974278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:27:56.149 [2024-07-26 03:56:10.974289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:56.149 [2024-07-26 03:56:10.974785] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:56.149 [2024-07-26 03:56:10.974834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67f15440-6dde-4ab9-b185-89b32e8c0bb4 00:27:56.149 [2024-07-26 03:56:10.974853] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:27:56.149 [2024-07-26 03:56:10.974864] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 9408 00:27:56.149 [2024-07-26 03:56:10.974875] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 8448 00:27:56.149 [2024-07-26 03:56:10.974901] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1136 00:27:56.149 [2024-07-26 03:56:10.974912] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:56.149 [2024-07-26 03:56:10.974924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:56.149 [2024-07-26 03:56:10.974939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:56.149 [2024-07-26 03:56:10.974949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:56.149 [2024-07-26 03:56:10.974959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:56.149 [2024-07-26 03:56:10.974973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.149 [2024-07-26 03:56:10.974984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:56.149 [2024-07-26 03:56:10.974996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:27:56.149 [2024-07-26 03:56:10.975007] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.149 [2024-07-26 03:56:10.992393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.149 [2024-07-26 03:56:10.992476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:56.149 [2024-07-26 03:56:10.992498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.302 ms 00:27:56.149 [2024-07-26 03:56:10.992549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.149 [2024-07-26 03:56:10.993089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.149 [2024-07-26 03:56:10.993125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:56.149 [2024-07-26 03:56:10.993142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:27:56.149 [2024-07-26 03:56:10.993154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.149 [2024-07-26 03:56:11.031697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.149 [2024-07-26 03:56:11.031790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.149 [2024-07-26 03:56:11.031844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.149 [2024-07-26 03:56:11.031861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.149 [2024-07-26 03:56:11.031950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.149 [2024-07-26 03:56:11.031966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.149 [2024-07-26 03:56:11.031979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.149 [2024-07-26 03:56:11.031991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.149 [2024-07-26 03:56:11.032131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.149 [2024-07-26 03:56:11.032152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.149 [2024-07-26 03:56:11.032164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.149 [2024-07-26 03:56:11.032183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.149 [2024-07-26 03:56:11.032207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.149 [2024-07-26 03:56:11.032222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.149 [2024-07-26 03:56:11.032234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.149 [2024-07-26 03:56:11.032245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.407 [2024-07-26 03:56:11.134678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.407 [2024-07-26 03:56:11.134772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.407 [2024-07-26 03:56:11.134812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.407 [2024-07-26 03:56:11.134875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.407 [2024-07-26 03:56:11.242534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.407 [2024-07-26 03:56:11.242625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.407 [2024-07-26 03:56:11.242647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:56.407 [2024-07-26 03:56:11.242661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.407 [2024-07-26 03:56:11.242772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.407 [2024-07-26 03:56:11.242806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:56.407 [2024-07-26 03:56:11.242871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.407 [2024-07-26 03:56:11.242885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.407 [2024-07-26 03:56:11.242963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.407 [2024-07-26 03:56:11.242981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:56.407 [2024-07-26 03:56:11.242993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.407 [2024-07-26 03:56:11.243005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.407 [2024-07-26 03:56:11.243127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.407 [2024-07-26 03:56:11.243148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:56.407 [2024-07-26 03:56:11.243161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.407 [2024-07-26 03:56:11.243173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.408 [2024-07-26 03:56:11.243229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.408 [2024-07-26 03:56:11.243253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:56.408 [2024-07-26 03:56:11.243265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.408 [2024-07-26 03:56:11.243276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.408 [2024-07-26 03:56:11.243323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.408 [2024-07-26 03:56:11.243339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:56.408 [2024-07-26 03:56:11.243351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.408 [2024-07-26 03:56:11.243362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.408 [2024-07-26 03:56:11.243420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.408 [2024-07-26 03:56:11.243438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:56.408 [2024-07-26 03:56:11.243450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.408 [2024-07-26 03:56:11.243461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.408 [2024-07-26 03:56:11.243601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 607.443 ms, result 0 00:27:57.780 00:27:57.780 00:27:57.780 03:56:12 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:00.307 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:00.307 03:56:14 
ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 81896 00:28:00.307 03:56:14 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81896 ']' 00:28:00.307 03:56:14 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81896 00:28:00.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81896) - No such process 00:28:00.307 Process with pid 81896 is not found 00:28:00.307 03:56:14 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 81896 is not found' 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:00.307 Remove shared memory files 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:00.307 03:56:14 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:00.307 ************************************ 00:28:00.307 END TEST ftl_restore 00:28:00.307 ************************************ 00:28:00.307 00:28:00.307 real 3m11.088s 00:28:00.307 user 2m54.518s 00:28:00.307 sys 0m17.962s 00:28:00.307 03:56:14 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.307 03:56:14 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:00.307 03:56:14 ftl -- common/autotest_common.sh@1142 -- # return 0 00:28:00.307 03:56:14 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:00.307 03:56:14 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:00.307 03:56:14 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.307 03:56:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:00.307 ************************************ 00:28:00.307 START TEST ftl_dirty_shutdown 00:28:00.307 ************************************ 00:28:00.307 03:56:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:00.307 * Looking for test storage... 00:28:00.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83868 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83868 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83868 ']' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.307 03:56:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:00.307 [2024-07-26 03:56:15.172106] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:28:00.307 [2024-07-26 03:56:15.172334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83868 ] 00:28:00.578 [2024-07-26 03:56:15.354338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.835 [2024-07-26 03:56:15.554545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:01.400 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:01.966 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:02.224 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:02.224 { 00:28:02.224 "name": "nvme0n1", 00:28:02.224 "aliases": [ 00:28:02.224 "f629c4fc-73c7-4726-b086-409531ea932c" 00:28:02.224 ], 00:28:02.224 "product_name": "NVMe disk", 00:28:02.224 "block_size": 4096, 00:28:02.224 "num_blocks": 1310720, 00:28:02.224 "uuid": "f629c4fc-73c7-4726-b086-409531ea932c", 00:28:02.224 "assigned_rate_limits": { 00:28:02.224 "rw_ios_per_sec": 0, 00:28:02.224 "rw_mbytes_per_sec": 0, 00:28:02.224 "r_mbytes_per_sec": 0, 00:28:02.224 "w_mbytes_per_sec": 0 00:28:02.224 }, 00:28:02.224 "claimed": true, 00:28:02.224 "claim_type": "read_many_write_one", 00:28:02.224 "zoned": false, 00:28:02.224 "supported_io_types": { 00:28:02.224 "read": true, 00:28:02.224 "write": true, 00:28:02.224 "unmap": true, 00:28:02.224 "flush": true, 00:28:02.224 "reset": true, 00:28:02.224 "nvme_admin": true, 00:28:02.224 "nvme_io": true, 00:28:02.224 "nvme_io_md": false, 00:28:02.224 "write_zeroes": true, 00:28:02.224 "zcopy": false, 00:28:02.224 "get_zone_info": false, 00:28:02.224 "zone_management": false, 00:28:02.224 "zone_append": false, 00:28:02.224 "compare": true, 00:28:02.224 "compare_and_write": false, 00:28:02.224 "abort": true, 00:28:02.224 "seek_hole": false, 00:28:02.224 "seek_data": false, 00:28:02.224 "copy": true, 00:28:02.224 "nvme_iov_md": false 00:28:02.224 }, 00:28:02.224 "driver_specific": { 00:28:02.224 "nvme": [ 00:28:02.224 { 00:28:02.224 "pci_address": "0000:00:11.0", 00:28:02.224 "trid": { 00:28:02.224 "trtype": "PCIe", 00:28:02.224 "traddr": "0000:00:11.0" 00:28:02.225 }, 00:28:02.225 "ctrlr_data": { 00:28:02.225 "cntlid": 0, 00:28:02.225 "vendor_id": "0x1b36", 00:28:02.225 "model_number": "QEMU NVMe Ctrl", 00:28:02.225 "serial_number": "12341", 00:28:02.225 "firmware_revision": "8.0.0", 00:28:02.225 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:02.225 "oacs": { 00:28:02.225 "security": 0, 00:28:02.225 "format": 1, 00:28:02.225 "firmware": 0, 00:28:02.225 "ns_manage": 1 00:28:02.225 }, 00:28:02.225 "multi_ctrlr": false, 00:28:02.225 "ana_reporting": false 00:28:02.225 }, 00:28:02.225 "vs": { 00:28:02.225 "nvme_version": "1.4" 00:28:02.225 }, 00:28:02.225 "ns_data": { 00:28:02.225 "id": 1, 00:28:02.225 "can_share": false 00:28:02.225 } 00:28:02.225 } 00:28:02.225 ], 00:28:02.225 "mp_policy": "active_passive" 00:28:02.225 } 00:28:02.225 } 00:28:02.225 ]' 00:28:02.225 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:02.225 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:02.225 03:56:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:02.225 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:02.483 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=97e62318-e8f9-41d7-bc1a-29ea2404daba 00:28:02.483 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:02.483 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97e62318-e8f9-41d7-bc1a-29ea2404daba 00:28:03.051 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:03.051 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=505f29ab-a1ee-42e5-83a1-ec9899d5a7a3 00:28:03.051 03:56:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 505f29ab-a1ee-42e5-83a1-ec9899d5a7a3 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:03.309 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:03.583 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:03.583 { 00:28:03.583 "name": "8f74c361-0b7c-4a5c-9567-5dc36773d2de", 00:28:03.583 "aliases": [ 00:28:03.583 "lvs/nvme0n1p0" 00:28:03.583 ], 00:28:03.583 "product_name": "Logical Volume", 00:28:03.583 "block_size": 4096, 00:28:03.583 "num_blocks": 26476544, 00:28:03.583 "uuid": "8f74c361-0b7c-4a5c-9567-5dc36773d2de", 00:28:03.583 "assigned_rate_limits": { 00:28:03.583 "rw_ios_per_sec": 0, 00:28:03.583 "rw_mbytes_per_sec": 0, 00:28:03.583 "r_mbytes_per_sec": 0, 00:28:03.583 "w_mbytes_per_sec": 0 00:28:03.583 }, 00:28:03.583 "claimed": false, 00:28:03.583 "zoned": false, 00:28:03.583 "supported_io_types": { 00:28:03.583 "read": true, 00:28:03.583 "write": true, 00:28:03.583 "unmap": true, 00:28:03.583 "flush": false, 00:28:03.583 "reset": true, 
00:28:03.583 "nvme_admin": false, 00:28:03.583 "nvme_io": false, 00:28:03.583 "nvme_io_md": false, 00:28:03.583 "write_zeroes": true, 00:28:03.583 "zcopy": false, 00:28:03.584 "get_zone_info": false, 00:28:03.584 "zone_management": false, 00:28:03.584 "zone_append": false, 00:28:03.584 "compare": false, 00:28:03.584 "compare_and_write": false, 00:28:03.584 "abort": false, 00:28:03.584 "seek_hole": true, 00:28:03.584 "seek_data": true, 00:28:03.584 "copy": false, 00:28:03.584 "nvme_iov_md": false 00:28:03.584 }, 00:28:03.584 "driver_specific": { 00:28:03.584 "lvol": { 00:28:03.584 "lvol_store_uuid": "505f29ab-a1ee-42e5-83a1-ec9899d5a7a3", 00:28:03.584 "base_bdev": "nvme0n1", 00:28:03.584 "thin_provision": true, 00:28:03.584 "num_allocated_clusters": 0, 00:28:03.584 "snapshot": false, 00:28:03.584 "clone": false, 00:28:03.584 "esnap_clone": false 00:28:03.584 } 00:28:03.584 } 00:28:03.584 } 00:28:03.584 ]' 00:28:03.584 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:03.850 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:04.108 03:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:04.366 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:04.366 { 00:28:04.366 "name": "8f74c361-0b7c-4a5c-9567-5dc36773d2de", 00:28:04.366 "aliases": [ 00:28:04.366 "lvs/nvme0n1p0" 00:28:04.367 ], 00:28:04.367 "product_name": "Logical Volume", 00:28:04.367 "block_size": 4096, 00:28:04.367 "num_blocks": 26476544, 00:28:04.367 "uuid": "8f74c361-0b7c-4a5c-9567-5dc36773d2de", 00:28:04.367 "assigned_rate_limits": { 00:28:04.367 "rw_ios_per_sec": 0, 00:28:04.367 "rw_mbytes_per_sec": 0, 00:28:04.367 "r_mbytes_per_sec": 0, 00:28:04.367 "w_mbytes_per_sec": 0 00:28:04.367 }, 00:28:04.367 "claimed": false, 00:28:04.367 "zoned": false, 00:28:04.367 "supported_io_types": { 00:28:04.367 "read": true, 00:28:04.367 "write": true, 00:28:04.367 "unmap": 
true, 00:28:04.367 "flush": false, 00:28:04.367 "reset": true, 00:28:04.367 "nvme_admin": false, 00:28:04.367 "nvme_io": false, 00:28:04.367 "nvme_io_md": false, 00:28:04.367 "write_zeroes": true, 00:28:04.367 "zcopy": false, 00:28:04.367 "get_zone_info": false, 00:28:04.367 "zone_management": false, 00:28:04.367 "zone_append": false, 00:28:04.367 "compare": false, 00:28:04.367 "compare_and_write": false, 00:28:04.367 "abort": false, 00:28:04.367 "seek_hole": true, 00:28:04.367 "seek_data": true, 00:28:04.367 "copy": false, 00:28:04.367 "nvme_iov_md": false 00:28:04.367 }, 00:28:04.367 "driver_specific": { 00:28:04.367 "lvol": { 00:28:04.367 "lvol_store_uuid": "505f29ab-a1ee-42e5-83a1-ec9899d5a7a3", 00:28:04.367 "base_bdev": "nvme0n1", 00:28:04.367 "thin_provision": true, 00:28:04.367 "num_allocated_clusters": 0, 00:28:04.367 "snapshot": false, 00:28:04.367 "clone": false, 00:28:04.367 "esnap_clone": false 00:28:04.367 } 00:28:04.367 } 00:28:04.367 } 00:28:04.367 ]' 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:04.367 03:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f74c361-0b7c-4a5c-9567-5dc36773d2de 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:04.934 { 00:28:04.934 "name": "8f74c361-0b7c-4a5c-9567-5dc36773d2de", 00:28:04.934 "aliases": [ 00:28:04.934 "lvs/nvme0n1p0" 00:28:04.934 ], 00:28:04.934 "product_name": "Logical Volume", 00:28:04.934 "block_size": 4096, 00:28:04.934 "num_blocks": 26476544, 00:28:04.934 "uuid": "8f74c361-0b7c-4a5c-9567-5dc36773d2de", 00:28:04.934 "assigned_rate_limits": { 00:28:04.934 "rw_ios_per_sec": 0, 00:28:04.934 "rw_mbytes_per_sec": 0, 00:28:04.934 "r_mbytes_per_sec": 0, 00:28:04.934 "w_mbytes_per_sec": 0 00:28:04.934 }, 00:28:04.934 "claimed": false, 00:28:04.934 "zoned": false, 00:28:04.934 "supported_io_types": { 00:28:04.934 "read": true, 00:28:04.934 "write": true, 00:28:04.934 "unmap": true, 00:28:04.934 "flush": false, 00:28:04.934 "reset": true, 00:28:04.934 "nvme_admin": false, 00:28:04.934 
"nvme_io": false, 00:28:04.934 "nvme_io_md": false, 00:28:04.934 "write_zeroes": true, 00:28:04.934 "zcopy": false, 00:28:04.934 "get_zone_info": false, 00:28:04.934 "zone_management": false, 00:28:04.934 "zone_append": false, 00:28:04.934 "compare": false, 00:28:04.934 "compare_and_write": false, 00:28:04.934 "abort": false, 00:28:04.934 "seek_hole": true, 00:28:04.934 "seek_data": true, 00:28:04.934 "copy": false, 00:28:04.934 "nvme_iov_md": false 00:28:04.934 }, 00:28:04.934 "driver_specific": { 00:28:04.934 "lvol": { 00:28:04.934 "lvol_store_uuid": "505f29ab-a1ee-42e5-83a1-ec9899d5a7a3", 00:28:04.934 "base_bdev": "nvme0n1", 00:28:04.934 "thin_provision": true, 00:28:04.934 "num_allocated_clusters": 0, 00:28:04.934 "snapshot": false, 00:28:04.934 "clone": false, 00:28:04.934 "esnap_clone": false 00:28:04.934 } 00:28:04.934 } 00:28:04.934 } 00:28:04.934 ]' 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:04.934 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:05.193 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:05.193 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:05.193 03:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:28:05.194 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:05.194 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8f74c361-0b7c-4a5c-9567-5dc36773d2de --l2p_dram_limit 10' 00:28:05.194 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:05.194 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:05.194 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:05.194 03:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8f74c361-0b7c-4a5c-9567-5dc36773d2de --l2p_dram_limit 10 -c nvc0n1p0 00:28:05.452 [2024-07-26 03:56:20.098258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.452 [2024-07-26 03:56:20.098329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:05.452 [2024-07-26 03:56:20.098353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:05.452 [2024-07-26 03:56:20.098367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.452 [2024-07-26 03:56:20.098457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.452 [2024-07-26 03:56:20.098478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:05.452 [2024-07-26 03:56:20.098492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:05.452 [2024-07-26 03:56:20.098506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.452 [2024-07-26 03:56:20.098535] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:05.452 [2024-07-26 03:56:20.099558] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:05.452 [2024-07-26 03:56:20.099595] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:28:05.452 [2024-07-26 03:56:20.099613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:05.452 [2024-07-26 03:56:20.099626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:28:05.452 [2024-07-26 03:56:20.099640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.452 [2024-07-26 03:56:20.099738] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4060d2b9-ed7f-4848-ac63-bd5cd8a344c5 00:28:05.452 [2024-07-26 03:56:20.100844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.452 [2024-07-26 03:56:20.100885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:05.452 [2024-07-26 03:56:20.100904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:05.452 [2024-07-26 03:56:20.100918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.452 [2024-07-26 03:56:20.105595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.452 [2024-07-26 03:56:20.105649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:05.452 [2024-07-26 03:56:20.105669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.589 ms 00:28:05.452 [2024-07-26 03:56:20.105682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.452 [2024-07-26 03:56:20.105829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.452 [2024-07-26 03:56:20.105853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:05.452 [2024-07-26 03:56:20.105869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:28:05.452 [2024-07-26 03:56:20.105882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.453 [2024-07-26 03:56:20.105961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.453 [2024-07-26 03:56:20.105979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:05.453 [2024-07-26 03:56:20.105998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:05.453 [2024-07-26 03:56:20.106010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.453 [2024-07-26 03:56:20.106045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:05.453 [2024-07-26 03:56:20.110667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.453 [2024-07-26 03:56:20.110715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:05.453 [2024-07-26 03:56:20.110732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:28:05.453 [2024-07-26 03:56:20.110746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.453 [2024-07-26 03:56:20.110793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.453 [2024-07-26 03:56:20.110837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:05.453 [2024-07-26 03:56:20.110853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:05.453 [2024-07-26 03:56:20.110868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.453 [2024-07-26 03:56:20.110947] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:05.453 [2024-07-26 
03:56:20.111114] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:05.453 [2024-07-26 03:56:20.111134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:05.453 [2024-07-26 03:56:20.111155] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:05.453 [2024-07-26 03:56:20.111171] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111187] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111200] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:05.453 [2024-07-26 03:56:20.111218] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:05.453 [2024-07-26 03:56:20.111231] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:05.453 [2024-07-26 03:56:20.111243] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:05.453 [2024-07-26 03:56:20.111256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.453 [2024-07-26 03:56:20.111269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:05.453 [2024-07-26 03:56:20.111282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:28:05.453 [2024-07-26 03:56:20.111296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.453 [2024-07-26 03:56:20.111389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.453 [2024-07-26 03:56:20.111407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:05.453 [2024-07-26 03:56:20.111419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:05.453 [2024-07-26 03:56:20.111437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.453 [2024-07-26 03:56:20.111544] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:05.453 [2024-07-26 03:56:20.111566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:05.453 [2024-07-26 03:56:20.111591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:05.453 [2024-07-26 03:56:20.111631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:05.453 [2024-07-26 03:56:20.111667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.453 [2024-07-26 03:56:20.111690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:05.453 [2024-07-26 03:56:20.111705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:05.453 [2024-07-26 03:56:20.111717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:28:05.453 [2024-07-26 03:56:20.111730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:05.453 [2024-07-26 03:56:20.111741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:05.453 [2024-07-26 03:56:20.111754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:05.453 [2024-07-26 03:56:20.111781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:05.453 [2024-07-26 03:56:20.111832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:05.453 [2024-07-26 03:56:20.111873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:05.453 [2024-07-26 03:56:20.111908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:05.453 [2024-07-26 03:56:20.111944] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.453 [2024-07-26 03:56:20.111967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:05.453 [2024-07-26 03:56:20.111978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:05.453 [2024-07-26 03:56:20.111993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.453 [2024-07-26 03:56:20.112004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:05.453 [2024-07-26 03:56:20.112017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:05.453 [2024-07-26 03:56:20.112028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.453 [2024-07-26 03:56:20.112042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:05.453 [2024-07-26 03:56:20.112053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:05.453 [2024-07-26 03:56:20.112066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.453 [2024-07-26 03:56:20.112077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:05.453 [2024-07-26 03:56:20.112091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:05.453 [2024-07-26 03:56:20.112102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.453 [2024-07-26 03:56:20.112115] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:05.453 [2024-07-26 03:56:20.112127] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:05.453 [2024-07-26 03:56:20.112140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.453 [2024-07-26 03:56:20.112152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.453 [2024-07-26 03:56:20.112165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:05.453 [2024-07-26 03:56:20.112177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:05.453 [2024-07-26 03:56:20.112191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:05.453 [2024-07-26 03:56:20.112203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:05.453 [2024-07-26 03:56:20.112216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:05.453 [2024-07-26 03:56:20.112227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:05.453 [2024-07-26 03:56:20.112245] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:05.453 [2024-07-26 03:56:20.112262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.453 [2024-07-26 03:56:20.112278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:05.453 [2024-07-26 03:56:20.112290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:05.453 [2024-07-26 03:56:20.112303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:05.453 [2024-07-26 03:56:20.112315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:05.453 [2024-07-26 03:56:20.112329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:05.453 [2024-07-26 03:56:20.112340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:05.453 [2024-07-26 03:56:20.112355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:05.453 [2024-07-26 03:56:20.112367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:05.453 [2024-07-26 03:56:20.112380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:05.453 [2024-07-26 03:56:20.112393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:05.453 [2024-07-26 03:56:20.112408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:05.453 [2024-07-26 03:56:20.112420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:05.453 [2024-07-26 03:56:20.112434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:05.454 [2024-07-26 
03:56:20.112446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:05.454 [2024-07-26 03:56:20.112460] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:05.454 [2024-07-26 03:56:20.112473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.454 [2024-07-26 03:56:20.112487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:05.454 [2024-07-26 03:56:20.112499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:05.454 [2024-07-26 03:56:20.112513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:05.454 [2024-07-26 03:56:20.112526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:05.454 [2024-07-26 03:56:20.112541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.454 [2024-07-26 03:56:20.112553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:05.454 [2024-07-26 03:56:20.112568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:28:05.454 [2024-07-26 03:56:20.112579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.454 [2024-07-26 03:56:20.112634] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
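At this point the bdev stack under ftl0 is fully assembled. Condensed from the rpc.py calls already shown in this trace (paths shortened to the repo root, UUIDs replaced by placeholders for the values reported above), the sequence amounts to:

  # base device: thin-provisioned lvol carved out of the 0000:00:11.0 namespace
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore uuid>
  # write-buffer cache: first 5171 MiB split of the 0000:00:10.0 namespace
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on top of both, with the L2P table capped at 10 MiB of DRAM
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> --l2p_dram_limit 10 -c nvc0n1p0

The 10 MiB --l2p_dram_limit corresponds to the "l2p maximum resident size is: 9 (of 10) MiB" notice reported later during L2P initialization.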
00:28:05.454 [2024-07-26 03:56:20.112651] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:07.355 [2024-07-26 03:56:22.157062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.157170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:07.355 [2024-07-26 03:56:22.157200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2044.423 ms 00:28:07.355 [2024-07-26 03:56:22.157215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.355 [2024-07-26 03:56:22.191380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.191458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.355 [2024-07-26 03:56:22.191485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.809 ms 00:28:07.355 [2024-07-26 03:56:22.191499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.355 [2024-07-26 03:56:22.191700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.191723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:07.355 [2024-07-26 03:56:22.191754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:07.355 [2024-07-26 03:56:22.191768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.355 [2024-07-26 03:56:22.231361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.231439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:07.355 [2024-07-26 03:56:22.231465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.511 ms 00:28:07.355 [2024-07-26 03:56:22.231480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.355 [2024-07-26 03:56:22.231556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.231572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:07.355 [2024-07-26 03:56:22.231594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:07.355 [2024-07-26 03:56:22.231607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.355 [2024-07-26 03:56:22.232089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.232116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:07.355 [2024-07-26 03:56:22.232142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:28:07.355 [2024-07-26 03:56:22.232155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.355 [2024-07-26 03:56:22.232310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.232331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:07.355 [2024-07-26 03:56:22.232347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:28:07.355 [2024-07-26 03:56:22.232360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.355 [2024-07-26 03:56:22.250616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.355 [2024-07-26 03:56:22.250691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:07.355 [2024-07-26 
03:56:22.250717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.221 ms 00:28:07.355 [2024-07-26 03:56:22.250731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.614 [2024-07-26 03:56:22.264765] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:07.614 [2024-07-26 03:56:22.267748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.614 [2024-07-26 03:56:22.267831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:07.614 [2024-07-26 03:56:22.267855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.814 ms 00:28:07.614 [2024-07-26 03:56:22.267871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.614 [2024-07-26 03:56:22.342583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.614 [2024-07-26 03:56:22.342699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:07.614 [2024-07-26 03:56:22.342726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.623 ms 00:28:07.614 [2024-07-26 03:56:22.342742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.614 [2024-07-26 03:56:22.343029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.614 [2024-07-26 03:56:22.343053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:07.614 [2024-07-26 03:56:22.343069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:28:07.614 [2024-07-26 03:56:22.343087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.614 [2024-07-26 03:56:22.376894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.614 [2024-07-26 03:56:22.377004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:07.614 [2024-07-26 03:56:22.377029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.667 ms 00:28:07.614 [2024-07-26 03:56:22.377050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.614 [2024-07-26 03:56:22.410509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.614 [2024-07-26 03:56:22.410628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:07.614 [2024-07-26 03:56:22.410652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.382 ms 00:28:07.614 [2024-07-26 03:56:22.410668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.614 [2024-07-26 03:56:22.411459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.614 [2024-07-26 03:56:22.411499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:07.614 [2024-07-26 03:56:22.411518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:28:07.614 [2024-07-26 03:56:22.411534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.614 [2024-07-26 03:56:22.502560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.614 [2024-07-26 03:56:22.502658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:07.614 [2024-07-26 03:56:22.502681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.916 ms 00:28:07.614 [2024-07-26 03:56:22.502700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.875 [2024-07-26 
03:56:22.545979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.875 [2024-07-26 03:56:22.546108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:07.875 [2024-07-26 03:56:22.546146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.177 ms 00:28:07.875 [2024-07-26 03:56:22.546178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.875 [2024-07-26 03:56:22.592415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.875 [2024-07-26 03:56:22.592535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:07.875 [2024-07-26 03:56:22.592570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.058 ms 00:28:07.875 [2024-07-26 03:56:22.592596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.875 [2024-07-26 03:56:22.639160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.875 [2024-07-26 03:56:22.639286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:07.875 [2024-07-26 03:56:22.639321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.439 ms 00:28:07.875 [2024-07-26 03:56:22.639354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.875 [2024-07-26 03:56:22.639555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.875 [2024-07-26 03:56:22.639595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:07.875 [2024-07-26 03:56:22.639620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:07.875 [2024-07-26 03:56:22.639649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.875 [2024-07-26 03:56:22.639888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.875 [2024-07-26 03:56:22.639935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:07.875 [2024-07-26 03:56:22.639959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:07.875 [2024-07-26 03:56:22.639985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.875 [2024-07-26 03:56:22.641483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2542.490 ms, result 0 00:28:07.875 { 00:28:07.875 "name": "ftl0", 00:28:07.875 "uuid": "4060d2b9-ed7f-4848-ac63-bd5cd8a344c5" 00:28:07.875 } 00:28:07.875 03:56:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:07.875 03:56:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:08.134 03:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:08.134 03:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:08.392 03:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:08.649 /dev/nbd0 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:08.649 1+0 records in 00:28:08.649 1+0 records out 00:28:08.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491145 s, 8.3 MB/s 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:28:08.649 03:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:08.649 [2024-07-26 03:56:23.424334] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:28:08.649 [2024-07-26 03:56:23.424495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84005 ] 00:28:08.906 [2024-07-26 03:56:23.588806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.906 [2024-07-26 03:56:23.778455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.450  Copying: 156/1024 [MB] (156 MBps) Copying: 310/1024 [MB] (154 MBps) Copying: 452/1024 [MB] (142 MBps) Copying: 607/1024 [MB] (154 MBps) Copying: 752/1024 [MB] (145 MBps) Copying: 900/1024 [MB] (147 MBps) Copying: 1024/1024 [MB] (average 150 MBps) 00:28:17.450 00:28:17.450 03:56:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:19.980 03:56:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:19.980 [2024-07-26 03:56:34.572537] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
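The data path of the test is visible in the two spdk_dd invocations: 1 GiB of random data is first written to a scratch file and checksummed, then streamed into the FTL bdev through the /dev/nbd0 block device. A sketch of that sequence as shown in the trace (paths shortened; capturing the checksum into testfile.md5 for a later md5sum -c, as the restore test above did, is an assumption, since the redirection is not visible here):

  # generate 1 GiB (262144 x 4 KiB) of test data and record its checksum
  build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile > test/ftl/testfile.md5
  # push the same data through the FTL bdev exposed as an NBD device
  build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct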
00:28:19.980 [2024-07-26 03:56:34.572705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84115 ] 00:28:19.980 [2024-07-26 03:56:34.734336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.238 [2024-07-26 03:56:34.922159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.554  Copying: 16/1024 [MB] (16 MBps) Copying: 31/1024 [MB] (14 MBps) Copying: 47/1024 [MB] (15 MBps) Copying: 61/1024 [MB] (14 MBps) Copying: 76/1024 [MB] (14 MBps) Copying: 91/1024 [MB] (15 MBps) Copying: 107/1024 [MB] (15 MBps) Copying: 123/1024 [MB] (16 MBps) Copying: 140/1024 [MB] (16 MBps) Copying: 157/1024 [MB] (16 MBps) Copying: 173/1024 [MB] (16 MBps) Copying: 190/1024 [MB] (16 MBps) Copying: 207/1024 [MB] (17 MBps) Copying: 225/1024 [MB] (17 MBps) Copying: 241/1024 [MB] (16 MBps) Copying: 258/1024 [MB] (16 MBps) Copying: 275/1024 [MB] (16 MBps) Copying: 292/1024 [MB] (17 MBps) Copying: 310/1024 [MB] (17 MBps) Copying: 326/1024 [MB] (16 MBps) Copying: 344/1024 [MB] (17 MBps) Copying: 360/1024 [MB] (16 MBps) Copying: 377/1024 [MB] (17 MBps) Copying: 395/1024 [MB] (17 MBps) Copying: 412/1024 [MB] (17 MBps) Copying: 430/1024 [MB] (17 MBps) Copying: 447/1024 [MB] (17 MBps) Copying: 465/1024 [MB] (17 MBps) Copying: 483/1024 [MB] (17 MBps) Copying: 500/1024 [MB] (17 MBps) Copying: 518/1024 [MB] (17 MBps) Copying: 536/1024 [MB] (17 MBps) Copying: 553/1024 [MB] (17 MBps) Copying: 571/1024 [MB] (17 MBps) Copying: 588/1024 [MB] (17 MBps) Copying: 606/1024 [MB] (17 MBps) Copying: 623/1024 [MB] (17 MBps) Copying: 640/1024 [MB] (17 MBps) Copying: 658/1024 [MB] (17 MBps) Copying: 675/1024 [MB] (17 MBps) Copying: 692/1024 [MB] (17 MBps) Copying: 710/1024 [MB] (17 MBps) Copying: 728/1024 [MB] (17 MBps) Copying: 744/1024 [MB] (16 MBps) Copying: 761/1024 [MB] (16 MBps) Copying: 778/1024 [MB] (17 MBps) Copying: 796/1024 [MB] (17 MBps) Copying: 813/1024 [MB] (17 MBps) Copying: 830/1024 [MB] (16 MBps) Copying: 846/1024 [MB] (16 MBps) Copying: 863/1024 [MB] (16 MBps) Copying: 879/1024 [MB] (16 MBps) Copying: 896/1024 [MB] (16 MBps) Copying: 912/1024 [MB] (16 MBps) Copying: 928/1024 [MB] (16 MBps) Copying: 945/1024 [MB] (16 MBps) Copying: 960/1024 [MB] (15 MBps) Copying: 976/1024 [MB] (16 MBps) Copying: 992/1024 [MB] (16 MBps) Copying: 1009/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:29:22.554 00:29:22.554 03:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:29:22.554 03:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:29:22.813 03:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:23.105 [2024-07-26 03:57:37.922619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.105 [2024-07-26 03:57:37.922713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:23.105 [2024-07-26 03:57:37.922796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:23.105 [2024-07-26 03:57:37.922809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.105 [2024-07-26 03:57:37.922848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
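Once the copy completes, the commands shown just above sync the device and unload ftl0 through the same RPC interface used to build it; the shutdown trace that follows persists the L2P and metadata regions and sets the superblock to the clean state. Condensed from this trace:

  sync /dev/nbd0                          # flush outstanding writes to the NBD device
  scripts/rpc.py nbd_stop_disk /dev/nbd0  # detach ftl0 from /dev/nbd0
  scripts/rpc.py bdev_ftl_unload -b ftl0  # graceful FTL shutdown (persist L2P, metadata, superblock)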
00:29:23.106 [2024-07-26 03:57:37.926787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.106 [2024-07-26 03:57:37.926870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:23.106 [2024-07-26 03:57:37.926923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:29:23.106 [2024-07-26 03:57:37.926942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.106 [2024-07-26 03:57:37.928638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.106 [2024-07-26 03:57:37.928722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:23.106 [2024-07-26 03:57:37.928756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:29:23.106 [2024-07-26 03:57:37.928775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.106 [2024-07-26 03:57:37.946064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.106 [2024-07-26 03:57:37.946118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:23.106 [2024-07-26 03:57:37.946138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.262 ms 00:29:23.106 [2024-07-26 03:57:37.946153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.106 [2024-07-26 03:57:37.952925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.106 [2024-07-26 03:57:37.952967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:23.106 [2024-07-26 03:57:37.952986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.726 ms 00:29:23.106 [2024-07-26 03:57:37.953001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.106 [2024-07-26 03:57:37.985445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.106 [2024-07-26 03:57:37.985521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:23.106 [2024-07-26 03:57:37.985542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.328 ms 00:29:23.106 [2024-07-26 03:57:37.985557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.106 [2024-07-26 03:57:38.006069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.106 [2024-07-26 03:57:38.006146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:23.106 [2024-07-26 03:57:38.006167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.448 ms 00:29:23.106 [2024-07-26 03:57:38.006182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.106 [2024-07-26 03:57:38.006382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.106 [2024-07-26 03:57:38.006411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:23.106 [2024-07-26 03:57:38.006456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:29:23.106 [2024-07-26 03:57:38.006471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.366 [2024-07-26 03:57:38.040628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.366 [2024-07-26 03:57:38.040691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:23.366 [2024-07-26 03:57:38.040711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.119 ms 00:29:23.366 [2024-07-26 03:57:38.040726] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.366 [2024-07-26 03:57:38.074742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.366 [2024-07-26 03:57:38.074850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:23.366 [2024-07-26 03:57:38.074874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.961 ms 00:29:23.366 [2024-07-26 03:57:38.074901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.366 [2024-07-26 03:57:38.108260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.366 [2024-07-26 03:57:38.108372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:23.366 [2024-07-26 03:57:38.108394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.280 ms 00:29:23.366 [2024-07-26 03:57:38.108408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.366 [2024-07-26 03:57:38.143249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.366 [2024-07-26 03:57:38.143309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:23.366 [2024-07-26 03:57:38.143328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.623 ms 00:29:23.366 [2024-07-26 03:57:38.143344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.366 [2024-07-26 03:57:38.143426] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:23.366 [2024-07-26 03:57:38.143454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 
03:57:38.143645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.143995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.144009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.144022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:23.366 [2024-07-26 03:57:38.144036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:29:23.367 [2024-07-26 03:57:38.144096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.144974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:23.367 [2024-07-26 03:57:38.145013] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:23.367 [2024-07-26 03:57:38.145025] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4060d2b9-ed7f-4848-ac63-bd5cd8a344c5 00:29:23.367 [2024-07-26 03:57:38.145044] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:23.367 [2024-07-26 03:57:38.145058] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:23.367 [2024-07-26 03:57:38.145074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:23.367 [2024-07-26 03:57:38.145086] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:23.367 [2024-07-26 03:57:38.145099] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:23.367 [2024-07-26 03:57:38.145111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:23.367 [2024-07-26 03:57:38.145124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:23.367 [2024-07-26 03:57:38.145134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:23.367 [2024-07-26 03:57:38.145147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:23.367 [2024-07-26 03:57:38.145159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.367 [2024-07-26 03:57:38.145189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:23.367 [2024-07-26 03:57:38.145202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:29:23.367 [2024-07-26 03:57:38.145217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.367 [2024-07-26 03:57:38.163441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.367 [2024-07-26 03:57:38.163524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:29:23.367 [2024-07-26 03:57:38.163545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.152 ms 00:29:23.367 [2024-07-26 03:57:38.163560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.367 [2024-07-26 03:57:38.164128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.367 [2024-07-26 03:57:38.164164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:23.367 [2024-07-26 03:57:38.164181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:29:23.367 [2024-07-26 03:57:38.164195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.367 [2024-07-26 03:57:38.220287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.367 [2024-07-26 03:57:38.220396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:23.367 [2024-07-26 03:57:38.220418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.367 [2024-07-26 03:57:38.220433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.367 [2024-07-26 03:57:38.220555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.367 [2024-07-26 03:57:38.220574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:23.367 [2024-07-26 03:57:38.220587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.367 [2024-07-26 03:57:38.220618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.367 [2024-07-26 03:57:38.220779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.368 [2024-07-26 03:57:38.220805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:23.368 [2024-07-26 03:57:38.220836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.368 [2024-07-26 03:57:38.220865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.368 [2024-07-26 03:57:38.220892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.368 [2024-07-26 03:57:38.220927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:23.368 [2024-07-26 03:57:38.220983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.368 [2024-07-26 03:57:38.221000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.328758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.328864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:23.627 [2024-07-26 03:57:38.328889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.328904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.418627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.418707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:23.627 [2024-07-26 03:57:38.418729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.418745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.418917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.418959] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:23.627 [2024-07-26 03:57:38.418974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.418988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.419062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.419092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:23.627 [2024-07-26 03:57:38.419105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.419119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.419244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.419270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:23.627 [2024-07-26 03:57:38.419286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.419300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.419354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.419377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:23.627 [2024-07-26 03:57:38.419390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.419417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.419466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.419487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:23.627 [2024-07-26 03:57:38.419502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.419517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.419575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.627 [2024-07-26 03:57:38.419600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:23.627 [2024-07-26 03:57:38.419614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.627 [2024-07-26 03:57:38.419628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.627 [2024-07-26 03:57:38.419787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.136 ms, result 0 00:29:23.627 true 00:29:23.627 03:57:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83868 00:29:23.627 03:57:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83868 00:29:23.627 03:57:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:29:23.886 [2024-07-26 03:57:38.569168] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
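With the clean unload reported above ("FTL shutdown", result 0), dirty_shutdown.sh@83-@87 kill the still-running spdk_tgt outright, remove its shared-memory trace file, and generate a second random-data file (testfile2). A sketch of that step; the $spdk_tgt_pid variable and shortened paths are illustrative, the trace shows the literal PID 83868 and full repo paths:

kill -9 "$spdk_tgt_pid"                              # no orderly target shutdown this time
rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"   # drop the stale trace ring file
spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144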
00:29:23.886 [2024-07-26 03:57:38.569350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84747 ] 00:29:23.886 [2024-07-26 03:57:38.748921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.144 [2024-07-26 03:57:38.943489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.082  Copying: 157/1024 [MB] (157 MBps) Copying: 324/1024 [MB] (166 MBps) Copying: 492/1024 [MB] (168 MBps) Copying: 657/1024 [MB] (164 MBps) Copying: 822/1024 [MB] (164 MBps) Copying: 990/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 165 MBps) 00:29:32.082 00:29:32.082 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83868 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:32.082 03:57:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:32.082 [2024-07-26 03:57:46.682312] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:29:32.082 [2024-07-26 03:57:46.683112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84828 ] 00:29:32.082 [2024-07-26 03:57:46.860097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.340 [2024-07-26 03:57:47.047548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.598 [2024-07-26 03:57:47.360629] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:32.598 [2024-07-26 03:57:47.360717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:32.598 [2024-07-26 03:57:47.427347] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:32.598 [2024-07-26 03:57:47.427669] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:32.598 [2024-07-26 03:57:47.427922] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:32.857 [2024-07-26 03:57:47.667243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.857 [2024-07-26 03:57:47.667318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:32.857 [2024-07-26 03:57:47.667339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:32.857 [2024-07-26 03:57:47.667351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.857 [2024-07-26 03:57:47.667439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.857 [2024-07-26 03:57:47.667462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:32.857 [2024-07-26 03:57:47.667476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:32.857 [2024-07-26 03:57:47.667487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.857 [2024-07-26 03:57:47.667520] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:32.857 [2024-07-26 03:57:47.668526] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:32.857 [2024-07-26 03:57:47.668573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.857 [2024-07-26 03:57:47.668589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:32.857 [2024-07-26 03:57:47.668602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:29:32.857 [2024-07-26 03:57:47.668613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.857 [2024-07-26 03:57:47.669872] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:32.857 [2024-07-26 03:57:47.686227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.857 [2024-07-26 03:57:47.686273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:32.857 [2024-07-26 03:57:47.686299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.356 ms 00:29:32.858 [2024-07-26 03:57:47.686312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.686387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.686409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:32.858 [2024-07-26 03:57:47.686422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:32.858 [2024-07-26 03:57:47.686434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.690955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.691005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:32.858 [2024-07-26 03:57:47.691022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.420 ms 00:29:32.858 [2024-07-26 03:57:47.691034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.691131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.691153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:32.858 [2024-07-26 03:57:47.691166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:32.858 [2024-07-26 03:57:47.691177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.691242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.691260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:32.858 [2024-07-26 03:57:47.691276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:32.858 [2024-07-26 03:57:47.691287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.691323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:32.858 [2024-07-26 03:57:47.695628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.695670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:32.858 [2024-07-26 03:57:47.695685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.314 ms 00:29:32.858 [2024-07-26 03:57:47.695697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 
03:57:47.695743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.695761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:32.858 [2024-07-26 03:57:47.695774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:32.858 [2024-07-26 03:57:47.695785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.695856] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:32.858 [2024-07-26 03:57:47.695893] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:32.858 [2024-07-26 03:57:47.695943] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:32.858 [2024-07-26 03:57:47.695965] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:32.858 [2024-07-26 03:57:47.696079] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:32.858 [2024-07-26 03:57:47.696096] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:32.858 [2024-07-26 03:57:47.696110] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:32.858 [2024-07-26 03:57:47.696125] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696139] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696156] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:32.858 [2024-07-26 03:57:47.696167] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:32.858 [2024-07-26 03:57:47.696178] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:32.858 [2024-07-26 03:57:47.696188] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:32.858 [2024-07-26 03:57:47.696200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.696211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:32.858 [2024-07-26 03:57:47.696223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:29:32.858 [2024-07-26 03:57:47.696234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.696329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.858 [2024-07-26 03:57:47.696346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:32.858 [2024-07-26 03:57:47.696364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:32.858 [2024-07-26 03:57:47.696375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.858 [2024-07-26 03:57:47.696507] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:32.858 [2024-07-26 03:57:47.696528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:32.858 [2024-07-26 03:57:47.696541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696552] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:32.858 [2024-07-26 03:57:47.696574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:32.858 [2024-07-26 03:57:47.696605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:32.858 [2024-07-26 03:57:47.696625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:32.858 [2024-07-26 03:57:47.696636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:32.858 [2024-07-26 03:57:47.696646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:32.858 [2024-07-26 03:57:47.696656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:32.858 [2024-07-26 03:57:47.696667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:32.858 [2024-07-26 03:57:47.696677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:32.858 [2024-07-26 03:57:47.696715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:32.858 [2024-07-26 03:57:47.696748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:32.858 [2024-07-26 03:57:47.696778] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696788] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:32.858 [2024-07-26 03:57:47.696809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:32.858 [2024-07-26 03:57:47.696859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.858 [2024-07-26 03:57:47.696879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:32.858 [2024-07-26 03:57:47.696890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:32.858 [2024-07-26 03:57:47.696911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:32.858 [2024-07-26 03:57:47.696921] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:32.858 [2024-07-26 03:57:47.696931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:32.858 [2024-07-26 03:57:47.696941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:32.858 [2024-07-26 03:57:47.696952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:32.858 [2024-07-26 03:57:47.696962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.858 [2024-07-26 03:57:47.696972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:32.858 [2024-07-26 03:57:47.696982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:32.858 [2024-07-26 03:57:47.696993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.858 [2024-07-26 03:57:47.697002] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:32.858 [2024-07-26 03:57:47.697014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:32.858 [2024-07-26 03:57:47.697025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:32.858 [2024-07-26 03:57:47.697036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.858 [2024-07-26 03:57:47.697052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:32.858 [2024-07-26 03:57:47.697063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:32.858 [2024-07-26 03:57:47.697074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:32.858 [2024-07-26 03:57:47.697085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:32.858 [2024-07-26 03:57:47.697095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:32.858 [2024-07-26 03:57:47.697105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:32.859 [2024-07-26 03:57:47.697117] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:32.859 [2024-07-26 03:57:47.697131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.859 [2024-07-26 03:57:47.697143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:32.859 [2024-07-26 03:57:47.697154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:32.859 [2024-07-26 03:57:47.697165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:32.859 [2024-07-26 03:57:47.697176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:32.859 [2024-07-26 03:57:47.697188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:32.859 [2024-07-26 03:57:47.697199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:32.859 [2024-07-26 03:57:47.697210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:32.859 [2024-07-26 
03:57:47.697220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:32.859 [2024-07-26 03:57:47.697231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:32.859 [2024-07-26 03:57:47.697243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:32.859 [2024-07-26 03:57:47.697254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:32.859 [2024-07-26 03:57:47.697265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:32.859 [2024-07-26 03:57:47.697276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:32.859 [2024-07-26 03:57:47.697287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:32.859 [2024-07-26 03:57:47.697298] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:32.859 [2024-07-26 03:57:47.697310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.859 [2024-07-26 03:57:47.697322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:32.859 [2024-07-26 03:57:47.697333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:32.859 [2024-07-26 03:57:47.697344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:32.859 [2024-07-26 03:57:47.697355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:32.859 [2024-07-26 03:57:47.697367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.859 [2024-07-26 03:57:47.697379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:32.859 [2024-07-26 03:57:47.697390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:29:32.859 [2024-07-26 03:57:47.697401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.859 [2024-07-26 03:57:47.744531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.859 [2024-07-26 03:57:47.744602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:32.859 [2024-07-26 03:57:47.744625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.066 ms 00:29:32.859 [2024-07-26 03:57:47.744638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.859 [2024-07-26 03:57:47.744764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.859 [2024-07-26 03:57:47.744782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:32.859 [2024-07-26 03:57:47.744802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:32.859 [2024-07-26 03:57:47.744813] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.783622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.783689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:33.118 [2024-07-26 03:57:47.783710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.680 ms 00:29:33.118 [2024-07-26 03:57:47.783722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.783802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.783843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:33.118 [2024-07-26 03:57:47.783860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:33.118 [2024-07-26 03:57:47.783872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.784276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.784303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:33.118 [2024-07-26 03:57:47.784317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:29:33.118 [2024-07-26 03:57:47.784329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.784488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.784509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:33.118 [2024-07-26 03:57:47.784521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:29:33.118 [2024-07-26 03:57:47.784532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.800801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.800862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:33.118 [2024-07-26 03:57:47.800882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.240 ms 00:29:33.118 [2024-07-26 03:57:47.800894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.817480] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:33.118 [2024-07-26 03:57:47.817530] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:33.118 [2024-07-26 03:57:47.817550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.817563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:33.118 [2024-07-26 03:57:47.817577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.484 ms 00:29:33.118 [2024-07-26 03:57:47.817588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.847773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.847842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:33.118 [2024-07-26 03:57:47.847863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.132 ms 00:29:33.118 [2024-07-26 03:57:47.847875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 
03:57:47.864289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.864334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:33.118 [2024-07-26 03:57:47.864352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.341 ms 00:29:33.118 [2024-07-26 03:57:47.864364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.880351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.880396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:33.118 [2024-07-26 03:57:47.880414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.939 ms 00:29:33.118 [2024-07-26 03:57:47.880425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.881309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.881348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:33.118 [2024-07-26 03:57:47.881364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:29:33.118 [2024-07-26 03:57:47.881375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.954953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.118 [2024-07-26 03:57:47.955024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:33.118 [2024-07-26 03:57:47.955046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.549 ms 00:29:33.118 [2024-07-26 03:57:47.955058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.118 [2024-07-26 03:57:47.967874] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:33.118 [2024-07-26 03:57:47.970557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.119 [2024-07-26 03:57:47.970588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:33.119 [2024-07-26 03:57:47.970605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.423 ms 00:29:33.119 [2024-07-26 03:57:47.970618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.119 [2024-07-26 03:57:47.970747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.119 [2024-07-26 03:57:47.970771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:33.119 [2024-07-26 03:57:47.970786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:33.119 [2024-07-26 03:57:47.970798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.119 [2024-07-26 03:57:47.970926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.119 [2024-07-26 03:57:47.970949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:33.119 [2024-07-26 03:57:47.970963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:33.119 [2024-07-26 03:57:47.970974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.119 [2024-07-26 03:57:47.971008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.119 [2024-07-26 03:57:47.971025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:33.119 [2024-07-26 03:57:47.971044] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:33.119 [2024-07-26 03:57:47.971055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.119 [2024-07-26 03:57:47.971097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:33.119 [2024-07-26 03:57:47.971116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.119 [2024-07-26 03:57:47.971127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:33.119 [2024-07-26 03:57:47.971140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:33.119 [2024-07-26 03:57:47.971150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.119 [2024-07-26 03:57:48.002899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.119 [2024-07-26 03:57:48.002990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:33.119 [2024-07-26 03:57:48.003013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.713 ms 00:29:33.119 [2024-07-26 03:57:48.003025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.119 [2024-07-26 03:57:48.003145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.119 [2024-07-26 03:57:48.003166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:33.119 [2024-07-26 03:57:48.003180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:33.119 [2024-07-26 03:57:48.003191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.119 [2024-07-26 03:57:48.004526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.755 ms, result 0 00:30:11.125  Copying: 29/1024 [MB] (29 MBps) Copying: 57/1024 [MB] (27 MBps) Copying: 85/1024 [MB] (28 MBps) Copying: 115/1024 [MB] (29 MBps) Copying: 142/1024 [MB] (27 MBps) Copying: 161/1024 [MB] (18 MBps) Copying: 190/1024 [MB] (28 MBps) Copying: 218/1024 [MB] (28 MBps) Copying: 244/1024 [MB] (26 MBps) Copying: 273/1024 [MB] (28 MBps) Copying: 303/1024 [MB] (29 MBps) Copying: 333/1024 [MB] (29 MBps) Copying: 361/1024 [MB] (28 MBps) Copying: 389/1024 [MB] (28 MBps) Copying: 418/1024 [MB] (28 MBps) Copying: 446/1024 [MB] (28 MBps) Copying: 472/1024 [MB] (26 MBps) Copying: 501/1024 [MB] (28 MBps) Copying: 529/1024 [MB] (28 MBps) Copying: 558/1024 [MB] (28 MBps) Copying: 587/1024 [MB] (29 MBps) Copying: 614/1024 [MB] (27 MBps) Copying: 643/1024 [MB] (28 MBps) Copying: 672/1024 [MB] (29 MBps) Copying: 701/1024 [MB] (28 MBps) Copying: 729/1024 [MB] (28 MBps) Copying: 757/1024 [MB] (28 MBps) Copying: 786/1024 [MB] (28 MBps) Copying: 814/1024 [MB] (27 MBps) Copying: 841/1024 [MB] (27 MBps) Copying: 870/1024 [MB] (28 MBps) Copying: 898/1024 [MB] (27 MBps) Copying: 926/1024 [MB] (28 MBps) Copying: 955/1024 [MB] (28 MBps) Copying: 984/1024 [MB] (28 MBps) Copying: 1012/1024 [MB] (28 MBps) Copying: 1023/1024 [MB] (11 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-26 03:58:25.732141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.732220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:11.125 [2024-07-26 03:58:25.732244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:11.125 [2024-07-26 03:58:25.732257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:11.125 [2024-07-26 03:58:25.733306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:11.125 [2024-07-26 03:58:25.739027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.739074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:11.125 [2024-07-26 03:58:25.739093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.674 ms 00:30:11.125 [2024-07-26 03:58:25.739105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.752734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.752812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:11.125 [2024-07-26 03:58:25.752854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.070 ms 00:30:11.125 [2024-07-26 03:58:25.752867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.774551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.774632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:11.125 [2024-07-26 03:58:25.774659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.655 ms 00:30:11.125 [2024-07-26 03:58:25.774677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.781671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.781709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:11.125 [2024-07-26 03:58:25.781736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.949 ms 00:30:11.125 [2024-07-26 03:58:25.781748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.814442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.814568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:11.125 [2024-07-26 03:58:25.814591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.585 ms 00:30:11.125 [2024-07-26 03:58:25.814603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.833356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.833434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:11.125 [2024-07-26 03:58:25.833455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.668 ms 00:30:11.125 [2024-07-26 03:58:25.833468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.925336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.925405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:11.125 [2024-07-26 03:58:25.925427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.790 ms 00:30:11.125 [2024-07-26 03:58:25.925448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.958767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.958853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 
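Each FTL management step above is reported by mngt/ftl_mngt.c:trace_step as a fixed group of records: an Action (or Rollback) marker, the step name, its duration in milliseconds, and a status code. Because the format is so regular, per-step timings can be pulled out of a saved console log with standard text tools. The snippet below is a hypothetical helper, not part of the SPDK tree; it assumes the log has been saved one record per line (as Jenkins normally prints it) to a file named build.log.

```bash
# Hypothetical helper, not part of SPDK: list FTL management steps by duration,
# slowest first, from a console log saved one record per line as build.log.
awk '
  /428:trace_step/ { sub(/.*name: /, "");     name = $0 }        # remember the step name
  /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                     printf "%10.3f ms  %s\n", $0, name }        # pair it with its duration
' build.log | sort -rn | head -20
```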
00:30:11.125 [2024-07-26 03:58:25.958876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.291 ms 00:30:11.125 [2024-07-26 03:58:25.958888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:25.992546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:25.992615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:11.125 [2024-07-26 03:58:25.992635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.587 ms 00:30:11.125 [2024-07-26 03:58:25.992647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.125 [2024-07-26 03:58:26.024332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.125 [2024-07-26 03:58:26.024406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:11.125 [2024-07-26 03:58:26.024428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.621 ms 00:30:11.125 [2024-07-26 03:58:26.024440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.385 [2024-07-26 03:58:26.055841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.385 [2024-07-26 03:58:26.055914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:11.385 [2024-07-26 03:58:26.055935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.258 ms 00:30:11.385 [2024-07-26 03:58:26.055946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.385 [2024-07-26 03:58:26.056016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:11.385 [2024-07-26 03:58:26.056041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 00:30:11.385 [2024-07-26 03:58:26.056056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:30:11.385 [2024-07-26 03:58:26.056202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.056998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057102] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:11.385 [2024-07-26 03:58:26.057265] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:11.385 [2024-07-26 03:58:26.057277] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4060d2b9-ed7f-4848-ac63-bd5cd8a344c5 00:30:11.385 [2024-07-26 03:58:26.057296] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048 00:30:11.385 [2024-07-26 03:58:26.057307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008 00:30:11.386 [2024-07-26 03:58:26.057321] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048 00:30:11.386 [2024-07-26 03:58:26.057333] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:30:11.386 [2024-07-26 03:58:26.057343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:11.386 [2024-07-26 03:58:26.057355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:11.386 [2024-07-26 03:58:26.057366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:11.386 [2024-07-26 03:58:26.057376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:11.386 [2024-07-26 03:58:26.057386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:11.386 [2024-07-26 03:58:26.057397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.386 [2024-07-26 03:58:26.057408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:11.386 [2024-07-26 03:58:26.057434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:30:11.386 [2024-07-26 03:58:26.057446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.386 [2024-07-26 03:58:26.074173] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.386 [2024-07-26 03:58:26.074241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:11.386 [2024-07-26 03:58:26.074260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.638 ms 00:30:11.386 [2024-07-26 03:58:26.074272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.386 [2024-07-26 03:58:26.074718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.386 [2024-07-26 03:58:26.074753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:11.386 [2024-07-26 03:58:26.074768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:30:11.386 [2024-07-26 03:58:26.074780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.386 [2024-07-26 03:58:26.112171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.386 [2024-07-26 03:58:26.112260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:11.386 [2024-07-26 03:58:26.112280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.386 [2024-07-26 03:58:26.112293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.386 [2024-07-26 03:58:26.112384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.386 [2024-07-26 03:58:26.112402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:11.386 [2024-07-26 03:58:26.112414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.386 [2024-07-26 03:58:26.112425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.386 [2024-07-26 03:58:26.112529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.386 [2024-07-26 03:58:26.112549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:11.386 [2024-07-26 03:58:26.112561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.386 [2024-07-26 03:58:26.112573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.386 [2024-07-26 03:58:26.112596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.386 [2024-07-26 03:58:26.112611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:11.386 [2024-07-26 03:58:26.112623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.386 [2024-07-26 03:58:26.112634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.386 [2024-07-26 03:58:26.211641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.386 [2024-07-26 03:58:26.211722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:11.386 [2024-07-26 03:58:26.211743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.386 [2024-07-26 03:58:26.211756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.297185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.645 [2024-07-26 03:58:26.297268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:11.645 [2024-07-26 03:58:26.297290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.645 [2024-07-26 03:58:26.297302] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.297410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.645 [2024-07-26 03:58:26.297454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.645 [2024-07-26 03:58:26.297466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.645 [2024-07-26 03:58:26.297477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.297528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.645 [2024-07-26 03:58:26.297544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.645 [2024-07-26 03:58:26.297556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.645 [2024-07-26 03:58:26.297567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.297689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.645 [2024-07-26 03:58:26.297714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.645 [2024-07-26 03:58:26.297727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.645 [2024-07-26 03:58:26.297738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.297789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.645 [2024-07-26 03:58:26.297808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:11.645 [2024-07-26 03:58:26.297852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.645 [2024-07-26 03:58:26.297867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.297914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.645 [2024-07-26 03:58:26.297931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.645 [2024-07-26 03:58:26.297949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.645 [2024-07-26 03:58:26.297961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.298013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.645 [2024-07-26 03:58:26.298030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.645 [2024-07-26 03:58:26.298042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.645 [2024-07-26 03:58:26.298053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.645 [2024-07-26 03:58:26.298208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.172 ms, result 0 00:30:13.051 00:30:13.051 00:30:13.051 03:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:15.584 03:58:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:15.584 [2024-07-26 03:58:30.076149] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
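The statistics dumped just before the shutdown finished are internally consistent: the reported WAF of 1.0074 matches total writes divided by user writes, 131008 / 130048 ≈ 1.0074. The two shell commands above then start the read-back phase of the test: testfile2 is checksummed and spdk_dd copies 262144 blocks out of the ftl0 bdev into testfile, which at a 4 KiB FTL block size is the 1024 MiB tracked by the copy progress entries. A minimal sketch of that kind of read-back-and-compare check follows; the paths and spdk_dd options are taken from the log, while the surrounding logic (which file is the reference, how a mismatch is reported) is assumed rather than copied from dirty_shutdown.sh.

```bash
# Sketch only: read the data back out of the ftl0 bdev and compare checksums.
# Paths and spdk_dd flags come from the log above; the control flow is assumed.
FTL_DIR=/home/vagrant/spdk_repo/spdk/test/ftl
ref_md5=$(md5sum "$FTL_DIR/testfile2" | cut -d' ' -f1)          # checksum of the reference copy

# 262144 blocks x 4 KiB = 1024 MiB read from the ftl0 bdev into a regular file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of="$FTL_DIR/testfile" --count=262144 \
    --json="$FTL_DIR/config/ftl.json"

new_md5=$(md5sum "$FTL_DIR/testfile" | cut -d' ' -f1)
[ "$ref_md5" = "$new_md5" ] || { echo "data mismatch after dirty shutdown" >&2; exit 1; }
```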
00:30:15.584 [2024-07-26 03:58:30.076295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85255 ] 00:30:15.584 [2024-07-26 03:58:30.240252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.584 [2024-07-26 03:58:30.470859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.151 [2024-07-26 03:58:30.780062] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.151 [2024-07-26 03:58:30.780146] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.151 [2024-07-26 03:58:30.940313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.151 [2024-07-26 03:58:30.940378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:16.151 [2024-07-26 03:58:30.940399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:16.151 [2024-07-26 03:58:30.940410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.151 [2024-07-26 03:58:30.940479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.151 [2024-07-26 03:58:30.940499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:16.152 [2024-07-26 03:58:30.940512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:16.152 [2024-07-26 03:58:30.940527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.940563] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:16.152 [2024-07-26 03:58:30.941504] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:16.152 [2024-07-26 03:58:30.941550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.941564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:16.152 [2024-07-26 03:58:30.941577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:30:16.152 [2024-07-26 03:58:30.941588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.942682] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:16.152 [2024-07-26 03:58:30.958969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.959017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:16.152 [2024-07-26 03:58:30.959036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.288 ms 00:30:16.152 [2024-07-26 03:58:30.959048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.959127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.959151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:16.152 [2024-07-26 03:58:30.959165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:16.152 [2024-07-26 03:58:30.959176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.963424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:16.152 [2024-07-26 03:58:30.963472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:16.152 [2024-07-26 03:58:30.963489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:30:16.152 [2024-07-26 03:58:30.963500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.963605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.963625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:16.152 [2024-07-26 03:58:30.963638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:16.152 [2024-07-26 03:58:30.963650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.963724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.963743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:16.152 [2024-07-26 03:58:30.963756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:16.152 [2024-07-26 03:58:30.963767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.963802] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:16.152 [2024-07-26 03:58:30.968069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.968110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:16.152 [2024-07-26 03:58:30.968127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:30:16.152 [2024-07-26 03:58:30.968138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.968185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.968201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:16.152 [2024-07-26 03:58:30.968214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:16.152 [2024-07-26 03:58:30.968224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.968272] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:16.152 [2024-07-26 03:58:30.968305] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:16.152 [2024-07-26 03:58:30.968349] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:16.152 [2024-07-26 03:58:30.968373] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:16.152 [2024-07-26 03:58:30.968479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:16.152 [2024-07-26 03:58:30.968494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:16.152 [2024-07-26 03:58:30.968509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:16.152 [2024-07-26 03:58:30.968524] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:16.152 [2024-07-26 03:58:30.968537] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:16.152 [2024-07-26 03:58:30.968549] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:16.152 [2024-07-26 03:58:30.968561] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:16.152 [2024-07-26 03:58:30.968572] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:16.152 [2024-07-26 03:58:30.968583] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:16.152 [2024-07-26 03:58:30.968594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.968609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:16.152 [2024-07-26 03:58:30.968622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:30:16.152 [2024-07-26 03:58:30.968633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.968721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.152 [2024-07-26 03:58:30.968736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:16.152 [2024-07-26 03:58:30.968748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:16.152 [2024-07-26 03:58:30.968759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.152 [2024-07-26 03:58:30.968892] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:16.152 [2024-07-26 03:58:30.968913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:16.152 [2024-07-26 03:58:30.968932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.152 [2024-07-26 03:58:30.968944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.152 [2024-07-26 03:58:30.968955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:16.152 [2024-07-26 03:58:30.968966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:16.152 [2024-07-26 03:58:30.968976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:16.152 [2024-07-26 03:58:30.968987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:16.152 [2024-07-26 03:58:30.968997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.152 [2024-07-26 03:58:30.969017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:16.152 [2024-07-26 03:58:30.969027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:16.152 [2024-07-26 03:58:30.969037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.152 [2024-07-26 03:58:30.969048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:16.152 [2024-07-26 03:58:30.969059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:16.152 [2024-07-26 03:58:30.969069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:16.152 [2024-07-26 03:58:30.969090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:16.152 [2024-07-26 03:58:30.969100] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:16.152 [2024-07-26 03:58:30.969132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.152 [2024-07-26 03:58:30.969153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:16.152 [2024-07-26 03:58:30.969163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.152 [2024-07-26 03:58:30.969183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:16.152 [2024-07-26 03:58:30.969192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.152 [2024-07-26 03:58:30.969213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:16.152 [2024-07-26 03:58:30.969223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.152 [2024-07-26 03:58:30.969242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:16.152 [2024-07-26 03:58:30.969253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.152 [2024-07-26 03:58:30.969274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:16.152 [2024-07-26 03:58:30.969284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:16.152 [2024-07-26 03:58:30.969293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.152 [2024-07-26 03:58:30.969304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:16.152 [2024-07-26 03:58:30.969314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:16.152 [2024-07-26 03:58:30.969323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:16.152 [2024-07-26 03:58:30.969344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:16.152 [2024-07-26 03:58:30.969355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.152 [2024-07-26 03:58:30.969365] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:16.153 [2024-07-26 03:58:30.969388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:16.153 [2024-07-26 03:58:30.969402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.153 [2024-07-26 03:58:30.969413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.153 [2024-07-26 03:58:30.969425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:16.153 [2024-07-26 03:58:30.969436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:16.153 [2024-07-26 03:58:30.969446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:16.153 
[2024-07-26 03:58:30.969456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:16.153 [2024-07-26 03:58:30.969466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:16.153 [2024-07-26 03:58:30.969477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:16.153 [2024-07-26 03:58:30.969489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:16.153 [2024-07-26 03:58:30.969503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.153 [2024-07-26 03:58:30.969516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:16.153 [2024-07-26 03:58:30.969527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:16.153 [2024-07-26 03:58:30.969538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:16.153 [2024-07-26 03:58:30.969550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:16.153 [2024-07-26 03:58:30.969561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:16.153 [2024-07-26 03:58:30.969572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:16.153 [2024-07-26 03:58:30.969583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:16.153 [2024-07-26 03:58:30.969595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:16.153 [2024-07-26 03:58:30.969606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:16.153 [2024-07-26 03:58:30.969617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:16.153 [2024-07-26 03:58:30.969628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:16.153 [2024-07-26 03:58:30.969639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:16.153 [2024-07-26 03:58:30.969650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:16.153 [2024-07-26 03:58:30.969662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:16.153 [2024-07-26 03:58:30.969673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:16.153 [2024-07-26 03:58:30.969685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.153 [2024-07-26 03:58:30.969703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.153 [2024-07-26 03:58:30.969714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:16.153 [2024-07-26 03:58:30.969725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:16.153 [2024-07-26 03:58:30.969737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:16.153 [2024-07-26 03:58:30.969749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.153 [2024-07-26 03:58:30.969761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:16.153 [2024-07-26 03:58:30.969773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:30:16.153 [2024-07-26 03:58:30.969784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.153 [2024-07-26 03:58:31.014319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.153 [2024-07-26 03:58:31.014380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:16.153 [2024-07-26 03:58:31.014402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.426 ms 00:30:16.153 [2024-07-26 03:58:31.014414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.153 [2024-07-26 03:58:31.014536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.153 [2024-07-26 03:58:31.014553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:16.153 [2024-07-26 03:58:31.014566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:16.153 [2024-07-26 03:58:31.014578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.153 [2024-07-26 03:58:31.053332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.153 [2024-07-26 03:58:31.053403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:16.153 [2024-07-26 03:58:31.053424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.656 ms 00:30:16.153 [2024-07-26 03:58:31.053436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.153 [2024-07-26 03:58:31.053512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.153 [2024-07-26 03:58:31.053530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:16.153 [2024-07-26 03:58:31.053542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:16.153 [2024-07-26 03:58:31.053560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.153 [2024-07-26 03:58:31.053984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.153 [2024-07-26 03:58:31.054004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:16.153 [2024-07-26 03:58:31.054018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:30:16.153 [2024-07-26 03:58:31.054029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.153 [2024-07-26 03:58:31.054206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.153 [2024-07-26 03:58:31.054253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:16.153 [2024-07-26 03:58:31.054267] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:30:16.153 [2024-07-26 03:58:31.054278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.070764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.070842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:16.413 [2024-07-26 03:58:31.070865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.437 ms 00:30:16.413 [2024-07-26 03:58:31.070883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.087212] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:16.413 [2024-07-26 03:58:31.087260] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:16.413 [2024-07-26 03:58:31.087279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.087291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:16.413 [2024-07-26 03:58:31.087304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.197 ms 00:30:16.413 [2024-07-26 03:58:31.087315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.117007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.117080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:16.413 [2024-07-26 03:58:31.117100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.642 ms 00:30:16.413 [2024-07-26 03:58:31.117113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.132907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.132951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:16.413 [2024-07-26 03:58:31.132969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.739 ms 00:30:16.413 [2024-07-26 03:58:31.132980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.148350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.148392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:16.413 [2024-07-26 03:58:31.148409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.325 ms 00:30:16.413 [2024-07-26 03:58:31.148420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.149339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.149379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:16.413 [2024-07-26 03:58:31.149395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:30:16.413 [2024-07-26 03:58:31.149407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.223477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.223551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:16.413 [2024-07-26 03:58:31.223572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.037 ms 00:30:16.413 [2024-07-26 03:58:31.223592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.236312] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:16.413 [2024-07-26 03:58:31.239015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.239055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:16.413 [2024-07-26 03:58:31.239095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.347 ms 00:30:16.413 [2024-07-26 03:58:31.239115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.413 [2024-07-26 03:58:31.239259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.413 [2024-07-26 03:58:31.239289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:16.413 [2024-07-26 03:58:31.239307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:16.413 [2024-07-26 03:58:31.239319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.414 [2024-07-26 03:58:31.240997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.414 [2024-07-26 03:58:31.241038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:16.414 [2024-07-26 03:58:31.241054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:30:16.414 [2024-07-26 03:58:31.241065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.414 [2024-07-26 03:58:31.241104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.414 [2024-07-26 03:58:31.241120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:16.414 [2024-07-26 03:58:31.241133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:16.414 [2024-07-26 03:58:31.241144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.414 [2024-07-26 03:58:31.241199] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:16.414 [2024-07-26 03:58:31.241234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.414 [2024-07-26 03:58:31.241253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:16.414 [2024-07-26 03:58:31.241266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:16.414 [2024-07-26 03:58:31.241277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.414 [2024-07-26 03:58:31.272514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.414 [2024-07-26 03:58:31.272565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:16.414 [2024-07-26 03:58:31.272585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.200 ms 00:30:16.414 [2024-07-26 03:58:31.272612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.414 [2024-07-26 03:58:31.272701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.414 [2024-07-26 03:58:31.272720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:16.414 [2024-07-26 03:58:31.272732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:16.414 [2024-07-26 03:58:31.272743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
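The layout dump during this second startup reports 20971520 L2P entries with a 4-byte address size, an l2p region of 80.00 MiB, and a superblock region at blk_offs:0x20 with blk_sz:0x5000. These figures agree with each other once a 4 KiB FTL block size is assumed (the same assumption behind the 1024 MiB copy size above):

$$20{,}971{,}520 \text{ entries} \times 4\ \text{B} = 83{,}886{,}080\ \text{B} = 80\ \text{MiB}$$
$$\texttt{blk\_sz} = \texttt{0x5000} = 20{,}480 \text{ blocks} \times 4\ \text{KiB} = 80\ \text{MiB},$$

which lines up with the l2p region size printed in the dump.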
00:30:16.414 [2024-07-26 03:58:31.280279] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.372 ms, result 0 00:30:53.122  Copying: 844/1048576 [kB] (844 kBps) Copying: 4236/1048576 [kB] (3392 kBps) Copying: 24/1024 [MB] (20 MBps) Copying: 55/1024 [MB] (30 MBps) Copying: 86/1024 [MB] (31 MBps) Copying: 117/1024 [MB] (30 MBps) Copying: 148/1024 [MB] (30 MBps) Copying: 179/1024 [MB] (31 MBps) Copying: 209/1024 [MB] (30 MBps) Copying: 240/1024 [MB] (30 MBps) Copying: 270/1024 [MB] (30 MBps) Copying: 299/1024 [MB] (29 MBps) Copying: 327/1024 [MB] (27 MBps) Copying: 356/1024 [MB] (29 MBps) Copying: 387/1024 [MB] (30 MBps) Copying: 417/1024 [MB] (29 MBps) Copying: 445/1024 [MB] (28 MBps) Copying: 477/1024 [MB] (31 MBps) Copying: 508/1024 [MB] (31 MBps) Copying: 540/1024 [MB] (31 MBps) Copying: 571/1024 [MB] (30 MBps) Copying: 601/1024 [MB] (30 MBps) Copying: 632/1024 [MB] (30 MBps) Copying: 663/1024 [MB] (30 MBps) Copying: 694/1024 [MB] (30 MBps) Copying: 724/1024 [MB] (30 MBps) Copying: 755/1024 [MB] (30 MBps) Copying: 786/1024 [MB] (31 MBps) Copying: 817/1024 [MB] (31 MBps) Copying: 848/1024 [MB] (30 MBps) Copying: 878/1024 [MB] (30 MBps) Copying: 905/1024 [MB] (27 MBps) Copying: 935/1024 [MB] (29 MBps) Copying: 965/1024 [MB] (30 MBps) Copying: 994/1024 [MB] (29 MBps) Copying: 1022/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-26 03:59:07.886295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.886374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:53.122 [2024-07-26 03:59:07.886406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:53.122 [2024-07-26 03:59:07.886427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.886559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:53.122 [2024-07-26 03:59:07.890172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.890326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:53.122 [2024-07-26 03:59:07.890451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:30:53.122 [2024-07-26 03:59:07.890584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.890859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.890889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:53.122 [2024-07-26 03:59:07.890913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:30:53.122 [2024-07-26 03:59:07.890925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.902132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.902200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:53.122 [2024-07-26 03:59:07.902220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.183 ms 00:30:53.122 [2024-07-26 03:59:07.902232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.909708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.909759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:30:53.122 [2024-07-26 03:59:07.909776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.433 ms 00:30:53.122 [2024-07-26 03:59:07.909799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.943450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.943516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:53.122 [2024-07-26 03:59:07.943538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.552 ms 00:30:53.122 [2024-07-26 03:59:07.943549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.961347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.961396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:53.122 [2024-07-26 03:59:07.961415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.738 ms 00:30:53.122 [2024-07-26 03:59:07.961427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.964813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.964869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:53.122 [2024-07-26 03:59:07.964885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.336 ms 00:30:53.122 [2024-07-26 03:59:07.964897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.122 [2024-07-26 03:59:07.996338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.122 [2024-07-26 03:59:07.996399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:53.122 [2024-07-26 03:59:07.996418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.416 ms 00:30:53.122 [2024-07-26 03:59:07.996430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.383 [2024-07-26 03:59:08.028330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.383 [2024-07-26 03:59:08.028405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:53.383 [2024-07-26 03:59:08.028426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.845 ms 00:30:53.383 [2024-07-26 03:59:08.028437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.383 [2024-07-26 03:59:08.060097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.383 [2024-07-26 03:59:08.060167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:53.383 [2024-07-26 03:59:08.060188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.599 ms 00:30:53.383 [2024-07-26 03:59:08.060220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.383 [2024-07-26 03:59:08.091165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.383 [2024-07-26 03:59:08.091252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:53.383 [2024-07-26 03:59:08.091272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.834 ms 00:30:53.383 [2024-07-26 03:59:08.091284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.383 [2024-07-26 03:59:08.091365] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:30:53.383 [2024-07-26 03:59:08.091390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:53.383 [2024-07-26 03:59:08.091404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:30:53.383 [2024-07-26 03:59:08.091417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.091998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:53.383 [2024-07-26 03:59:08.092127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092290] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092582] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:53.384 [2024-07-26 03:59:08.092604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:53.384 [2024-07-26 03:59:08.092616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4060d2b9-ed7f-4848-ac63-bd5cd8a344c5 00:30:53.384 [2024-07-26 03:59:08.092627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:30:53.384 [2024-07-26 03:59:08.092644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 00:30:53.384 [2024-07-26 03:59:08.092655] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 00:30:53.384 [2024-07-26 03:59:08.092667] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:30:53.384 [2024-07-26 03:59:08.092682] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:53.384 [2024-07-26 03:59:08.092694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:53.384 [2024-07-26 03:59:08.092706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:53.384 [2024-07-26 03:59:08.092716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:53.384 [2024-07-26 03:59:08.092726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:53.384 [2024-07-26 03:59:08.092738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.384 [2024-07-26 03:59:08.092749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:53.384 [2024-07-26 03:59:08.092761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:30:53.384 [2024-07-26 03:59:08.092773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.384 [2024-07-26 03:59:08.109807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.384 [2024-07-26 03:59:08.109885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:53.384 [2024-07-26 03:59:08.109913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.953 ms 00:30:53.384 [2024-07-26 03:59:08.109939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.384 [2024-07-26 03:59:08.110392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.384 [2024-07-26 03:59:08.110415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:53.384 [2024-07-26 03:59:08.110430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:30:53.384 [2024-07-26 03:59:08.110441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.384 [2024-07-26 03:59:08.147368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.384 [2024-07-26 03:59:08.147436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:53.384 [2024-07-26 03:59:08.147455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.384 [2024-07-26 03:59:08.147466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.384 [2024-07-26 03:59:08.147548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.384 [2024-07-26 03:59:08.147563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:53.384 [2024-07-26 03:59:08.147575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.384 [2024-07-26 
03:59:08.147586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.384 [2024-07-26 03:59:08.147682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.384 [2024-07-26 03:59:08.147707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:53.384 [2024-07-26 03:59:08.147719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.384 [2024-07-26 03:59:08.147730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.384 [2024-07-26 03:59:08.147753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.384 [2024-07-26 03:59:08.147766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:53.384 [2024-07-26 03:59:08.147778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.384 [2024-07-26 03:59:08.147789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.384 [2024-07-26 03:59:08.247269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.384 [2024-07-26 03:59:08.247349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:53.384 [2024-07-26 03:59:08.247369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.384 [2024-07-26 03:59:08.247381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.331777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.644 [2024-07-26 03:59:08.331869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:53.644 [2024-07-26 03:59:08.331889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.644 [2024-07-26 03:59:08.331901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.332003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.644 [2024-07-26 03:59:08.332021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:53.644 [2024-07-26 03:59:08.332041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.644 [2024-07-26 03:59:08.332053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.332100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.644 [2024-07-26 03:59:08.332115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:53.644 [2024-07-26 03:59:08.332127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.644 [2024-07-26 03:59:08.332138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.332256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.644 [2024-07-26 03:59:08.332275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:53.644 [2024-07-26 03:59:08.332287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.644 [2024-07-26 03:59:08.332304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.332355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.644 [2024-07-26 03:59:08.332373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:53.644 [2024-07-26 03:59:08.332384] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.644 [2024-07-26 03:59:08.332395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.332437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.644 [2024-07-26 03:59:08.332451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:53.644 [2024-07-26 03:59:08.332463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.644 [2024-07-26 03:59:08.332474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.332532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.644 [2024-07-26 03:59:08.332548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:53.644 [2024-07-26 03:59:08.332560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.644 [2024-07-26 03:59:08.332571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.644 [2024-07-26 03:59:08.332709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.382 ms, result 0 00:30:54.579 00:30:54.579 00:30:54.579 03:59:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:57.168 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:57.168 03:59:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:57.168 [2024-07-26 03:59:11.717895] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
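
One number worth a glance in the statistics block dumped before the shutdown above is the WAF figure; read as the ratio of total media writes to user writes, it reproduces the logged value exactly. A quick sanity check with the figures copied from that dump (the interpretation is an assumption, the arithmetic is verifiable):

    # Write counts as reported by ftl_dev_dump_stats above (first run, ftl0).
    total_writes = 136384   # "total writes"
    user_writes  = 134400   # "user writes"

    waf = total_writes / user_writes   # write amplification: media writes / host writes
    print(f"WAF = {waf:.4f}")          # -> WAF = 1.0148, matching the logged value
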
00:30:57.168 [2024-07-26 03:59:11.718695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85655 ] 00:30:57.168 [2024-07-26 03:59:11.885581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.426 [2024-07-26 03:59:12.112661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.685 [2024-07-26 03:59:12.435760] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:57.685 [2024-07-26 03:59:12.436056] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:57.946 [2024-07-26 03:59:12.596126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.596408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:57.946 [2024-07-26 03:59:12.596443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:57.946 [2024-07-26 03:59:12.596457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.596549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.596569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.946 [2024-07-26 03:59:12.596583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:57.946 [2024-07-26 03:59:12.596598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.596636] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:57.946 [2024-07-26 03:59:12.597617] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:57.946 [2024-07-26 03:59:12.597663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.597679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.946 [2024-07-26 03:59:12.597692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:30:57.946 [2024-07-26 03:59:12.597704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.598843] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:57.946 [2024-07-26 03:59:12.615130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.615185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:57.946 [2024-07-26 03:59:12.615206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.287 ms 00:30:57.946 [2024-07-26 03:59:12.615218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.615304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.615327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:57.946 [2024-07-26 03:59:12.615341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:57.946 [2024-07-26 03:59:12.615353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.619673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:57.946 [2024-07-26 03:59:12.619731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.946 [2024-07-26 03:59:12.619750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.214 ms 00:30:57.946 [2024-07-26 03:59:12.619762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.619894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.619916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.946 [2024-07-26 03:59:12.619930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:30:57.946 [2024-07-26 03:59:12.619942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.620016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.620034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:57.946 [2024-07-26 03:59:12.620048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:57.946 [2024-07-26 03:59:12.620059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.620093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:57.946 [2024-07-26 03:59:12.624367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.624415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.946 [2024-07-26 03:59:12.624432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:30:57.946 [2024-07-26 03:59:12.624444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.624498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.624515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:57.946 [2024-07-26 03:59:12.624528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:57.946 [2024-07-26 03:59:12.624540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.624595] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:57.946 [2024-07-26 03:59:12.624627] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:57.946 [2024-07-26 03:59:12.624675] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:57.946 [2024-07-26 03:59:12.624702] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:57.946 [2024-07-26 03:59:12.624813] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:57.946 [2024-07-26 03:59:12.624867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:57.946 [2024-07-26 03:59:12.624884] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:57.946 [2024-07-26 03:59:12.624898] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:57.946 [2024-07-26 03:59:12.624912] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:57.946 [2024-07-26 03:59:12.624925] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:57.946 [2024-07-26 03:59:12.624938] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:57.946 [2024-07-26 03:59:12.624957] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:57.946 [2024-07-26 03:59:12.624969] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:57.946 [2024-07-26 03:59:12.624981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.624998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:57.946 [2024-07-26 03:59:12.625012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:30:57.946 [2024-07-26 03:59:12.625023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.625122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.946 [2024-07-26 03:59:12.625139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:57.946 [2024-07-26 03:59:12.625151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:57.946 [2024-07-26 03:59:12.625162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.946 [2024-07-26 03:59:12.625274] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:57.946 [2024-07-26 03:59:12.625292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:57.946 [2024-07-26 03:59:12.625310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:57.946 [2024-07-26 03:59:12.625346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625357] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:57.946 [2024-07-26 03:59:12.625377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:57.946 [2024-07-26 03:59:12.625398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:57.946 [2024-07-26 03:59:12.625408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:57.946 [2024-07-26 03:59:12.625418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:57.946 [2024-07-26 03:59:12.625428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:57.946 [2024-07-26 03:59:12.625439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:57.946 [2024-07-26 03:59:12.625449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:57.946 [2024-07-26 03:59:12.625470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625480] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:57.946 [2024-07-26 03:59:12.625515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:57.946 [2024-07-26 03:59:12.625546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:57.946 [2024-07-26 03:59:12.625576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:57.946 [2024-07-26 03:59:12.625606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:57.946 [2024-07-26 03:59:12.625636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:57.946 [2024-07-26 03:59:12.625657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:57.946 [2024-07-26 03:59:12.625667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:57.946 [2024-07-26 03:59:12.625677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:57.946 [2024-07-26 03:59:12.625687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:57.946 [2024-07-26 03:59:12.625698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:57.946 [2024-07-26 03:59:12.625708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:57.946 [2024-07-26 03:59:12.625728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:57.946 [2024-07-26 03:59:12.625738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625748] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:57.946 [2024-07-26 03:59:12.625759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:57.946 [2024-07-26 03:59:12.625770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.946 [2024-07-26 03:59:12.625793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:57.946 [2024-07-26 03:59:12.625803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:57.946 [2024-07-26 03:59:12.625813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:57.946 
[2024-07-26 03:59:12.625843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:57.946 [2024-07-26 03:59:12.625854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:57.946 [2024-07-26 03:59:12.625865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:57.946 [2024-07-26 03:59:12.625877] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:57.946 [2024-07-26 03:59:12.625891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.946 [2024-07-26 03:59:12.625904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:57.946 [2024-07-26 03:59:12.625917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:57.946 [2024-07-26 03:59:12.625928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:57.946 [2024-07-26 03:59:12.625940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:57.946 [2024-07-26 03:59:12.625959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:57.946 [2024-07-26 03:59:12.625971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:57.947 [2024-07-26 03:59:12.625982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:57.947 [2024-07-26 03:59:12.625993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:57.947 [2024-07-26 03:59:12.626004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:57.947 [2024-07-26 03:59:12.626015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:57.947 [2024-07-26 03:59:12.626026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:57.947 [2024-07-26 03:59:12.626038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:57.947 [2024-07-26 03:59:12.626049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:57.947 [2024-07-26 03:59:12.626066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:57.947 [2024-07-26 03:59:12.626085] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:57.947 [2024-07-26 03:59:12.626100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.947 [2024-07-26 03:59:12.626118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:57.947 [2024-07-26 03:59:12.626131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:57.947 [2024-07-26 03:59:12.626142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:57.947 [2024-07-26 03:59:12.626153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:57.947 [2024-07-26 03:59:12.626166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.626178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:57.947 [2024-07-26 03:59:12.626190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:30:57.947 [2024-07-26 03:59:12.626202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.670271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.670336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.947 [2024-07-26 03:59:12.670359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.973 ms 00:30:57.947 [2024-07-26 03:59:12.670372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.670496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.670515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:57.947 [2024-07-26 03:59:12.670528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:57.947 [2024-07-26 03:59:12.670539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.709039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.709105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.947 [2024-07-26 03:59:12.709126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.402 ms 00:30:57.947 [2024-07-26 03:59:12.709138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.709211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.709228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.947 [2024-07-26 03:59:12.709242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:57.947 [2024-07-26 03:59:12.709259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.709632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.709651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.947 [2024-07-26 03:59:12.709665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:30:57.947 [2024-07-26 03:59:12.709676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.709854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.709875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.947 [2024-07-26 03:59:12.709888] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:30:57.947 [2024-07-26 03:59:12.709900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.725893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.725952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:57.947 [2024-07-26 03:59:12.725971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.956 ms 00:30:57.947 [2024-07-26 03:59:12.725990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.742336] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:57.947 [2024-07-26 03:59:12.742421] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:57.947 [2024-07-26 03:59:12.742446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.742463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:57.947 [2024-07-26 03:59:12.742487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.285 ms 00:30:57.947 [2024-07-26 03:59:12.742500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.773299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.773399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:57.947 [2024-07-26 03:59:12.773421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.709 ms 00:30:57.947 [2024-07-26 03:59:12.773447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.789569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.789622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:57.947 [2024-07-26 03:59:12.789642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.036 ms 00:30:57.947 [2024-07-26 03:59:12.789655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.805155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.805209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:57.947 [2024-07-26 03:59:12.805228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.448 ms 00:30:57.947 [2024-07-26 03:59:12.805240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.947 [2024-07-26 03:59:12.806099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.947 [2024-07-26 03:59:12.806139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:57.947 [2024-07-26 03:59:12.806155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:30:57.947 [2024-07-26 03:59:12.806168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.880984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.881059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:58.206 [2024-07-26 03:59:12.881082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.783 ms 00:30:58.206 [2024-07-26 03:59:12.881104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.893744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:58.206 [2024-07-26 03:59:12.896362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.896401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:58.206 [2024-07-26 03:59:12.896421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.179 ms 00:30:58.206 [2024-07-26 03:59:12.896433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.896561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.896582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:58.206 [2024-07-26 03:59:12.896597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:58.206 [2024-07-26 03:59:12.896608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.897243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.897272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:58.206 [2024-07-26 03:59:12.897287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:30:58.206 [2024-07-26 03:59:12.897299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.897334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.897351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:58.206 [2024-07-26 03:59:12.897363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:58.206 [2024-07-26 03:59:12.897375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.897416] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:58.206 [2024-07-26 03:59:12.897434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.897450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:58.206 [2024-07-26 03:59:12.897462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:58.206 [2024-07-26 03:59:12.897474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.928710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.928790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:58.206 [2024-07-26 03:59:12.928812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.204 ms 00:30:58.206 [2024-07-26 03:59:12.928866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.206 [2024-07-26 03:59:12.928989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.206 [2024-07-26 03:59:12.929009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:58.206 [2024-07-26 03:59:12.929022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:58.207 [2024-07-26 03:59:12.929035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
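
The "SB metadata layout" dump above lists every region as a hexadecimal block count (blk_sz), while the preceding layout dump lists the same regions in MiB. The two agree if the FTL block size is 4 KiB, an assumption inferred from those figures rather than stated in the log. A minimal conversion sketch, with a few (type, blk_sz) pairs copied from the dump and region names matched to the MiB listing by size:

    FTL_BLOCK_SIZE = 4096  # bytes; assumed, consistent with the MiB figures above

    # (region, blk_sz) pairs copied from the SB metadata layout dump above;
    # names are matched to the MiB layout dump purely by size.
    regions = {
        "sb        (type 0x0)": 0x20,        # -> 0.12 MiB
        "l2p       (type 0x2)": 0x5000,      # -> 80.00 MiB
        "band_md   (type 0x3)": 0x80,        # -> 0.50 MiB
        "p2l0      (type 0xa)": 0x800,       # -> 8.00 MiB
        "data_btm  (type 0x9)": 0x1900000,   # -> 102400.00 MiB
    }

    for name, blk_sz in regions.items():
        mib = blk_sz * FTL_BLOCK_SIZE / (1024 * 1024)
        print(f"{name}: {blk_sz:#9x} blocks = {mib:10.2f} MiB")
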
00:30:58.207 [2024-07-26 03:59:12.930428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.775 ms, result 0 00:31:36.677  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (25 MBps) Copying: 81/1024 [MB] (29 MBps) Copying: 109/1024 [MB] (28 MBps) Copying: 136/1024 [MB] (27 MBps) Copying: 162/1024 [MB] (25 MBps) Copying: 189/1024 [MB] (27 MBps) Copying: 216/1024 [MB] (27 MBps) Copying: 243/1024 [MB] (26 MBps) Copying: 270/1024 [MB] (27 MBps) Copying: 296/1024 [MB] (25 MBps) Copying: 321/1024 [MB] (24 MBps) Copying: 347/1024 [MB] (26 MBps) Copying: 376/1024 [MB] (28 MBps) Copying: 403/1024 [MB] (27 MBps) Copying: 430/1024 [MB] (26 MBps) Copying: 457/1024 [MB] (27 MBps) Copying: 485/1024 [MB] (27 MBps) Copying: 510/1024 [MB] (25 MBps) Copying: 537/1024 [MB] (27 MBps) Copying: 564/1024 [MB] (26 MBps) Copying: 590/1024 [MB] (26 MBps) Copying: 620/1024 [MB] (29 MBps) Copying: 648/1024 [MB] (27 MBps) Copying: 676/1024 [MB] (28 MBps) Copying: 702/1024 [MB] (26 MBps) Copying: 730/1024 [MB] (27 MBps) Copying: 756/1024 [MB] (25 MBps) Copying: 783/1024 [MB] (27 MBps) Copying: 809/1024 [MB] (26 MBps) Copying: 836/1024 [MB] (26 MBps) Copying: 864/1024 [MB] (27 MBps) Copying: 889/1024 [MB] (25 MBps) Copying: 914/1024 [MB] (25 MBps) Copying: 938/1024 [MB] (23 MBps) Copying: 964/1024 [MB] (26 MBps) Copying: 990/1024 [MB] (25 MBps) Copying: 1016/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-26 03:59:51.499111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.677 [2024-07-26 03:59:51.499190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:36.677 [2024-07-26 03:59:51.499212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:36.677 [2024-07-26 03:59:51.499225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.677 [2024-07-26 03:59:51.499257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:36.677 [2024-07-26 03:59:51.504226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.677 [2024-07-26 03:59:51.504275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:36.677 [2024-07-26 03:59:51.504296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.944 ms 00:31:36.677 [2024-07-26 03:59:51.504322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.677 [2024-07-26 03:59:51.504651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.677 [2024-07-26 03:59:51.504686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:36.677 [2024-07-26 03:59:51.504705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:31:36.677 [2024-07-26 03:59:51.504721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.677 [2024-07-26 03:59:51.509743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.677 [2024-07-26 03:59:51.509784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:36.677 [2024-07-26 03:59:51.509804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.984 ms 00:31:36.677 [2024-07-26 03:59:51.509839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.677 [2024-07-26 03:59:51.517054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.677 [2024-07-26 
03:59:51.517084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:36.677 [2024-07-26 03:59:51.517115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.175 ms 00:31:36.677 [2024-07-26 03:59:51.517127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.677 [2024-07-26 03:59:51.548676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.677 [2024-07-26 03:59:51.548732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:36.677 [2024-07-26 03:59:51.548767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.473 ms 00:31:36.677 [2024-07-26 03:59:51.548779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.974 [2024-07-26 03:59:51.571257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.974 [2024-07-26 03:59:51.571338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:36.974 [2024-07-26 03:59:51.571369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.442 ms 00:31:36.974 [2024-07-26 03:59:51.571391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.974 [2024-07-26 03:59:51.574681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.974 [2024-07-26 03:59:51.574743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:36.974 [2024-07-26 03:59:51.574784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:31:36.974 [2024-07-26 03:59:51.574808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.974 [2024-07-26 03:59:51.609627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.974 [2024-07-26 03:59:51.609744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:36.974 [2024-07-26 03:59:51.609782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.758 ms 00:31:36.974 [2024-07-26 03:59:51.609794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.974 [2024-07-26 03:59:51.640448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.974 [2024-07-26 03:59:51.640561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:36.974 [2024-07-26 03:59:51.640601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.601 ms 00:31:36.974 [2024-07-26 03:59:51.640614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.974 [2024-07-26 03:59:51.671846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.974 [2024-07-26 03:59:51.671937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:36.974 [2024-07-26 03:59:51.671992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.182 ms 00:31:36.974 [2024-07-26 03:59:51.672004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.974 [2024-07-26 03:59:51.701497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.974 [2024-07-26 03:59:51.701570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:36.974 [2024-07-26 03:59:51.701606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.400 ms 00:31:36.974 [2024-07-26 03:59:51.701617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.974 [2024-07-26 03:59:51.701647] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:36.974 [2024-07-26 03:59:51.701668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:36.974 [2024-07-26 03:59:51.701682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:31:36.974 [2024-07-26 03:59:51.701694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701979] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.701991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:36.974 [2024-07-26 03:59:51.702177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 
03:59:51.702281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:31:36.975 [2024-07-26 03:59:51.702587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:36.975 [2024-07-26 03:59:51.702922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:36.975 [2024-07-26 03:59:51.702933] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4060d2b9-ed7f-4848-ac63-bd5cd8a344c5 00:31:36.975 [2024-07-26 03:59:51.702953] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:31:36.975 [2024-07-26 03:59:51.702964] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:36.975 [2024-07-26 03:59:51.702975] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:36.975 [2024-07-26 03:59:51.702987] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:36.975 [2024-07-26 03:59:51.702998] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:36.975 [2024-07-26 03:59:51.703009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:36.975 [2024-07-26 03:59:51.703020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:36.975 [2024-07-26 03:59:51.703030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:36.975 [2024-07-26 03:59:51.703051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:36.975 [2024-07-26 03:59:51.703063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.975 [2024-07-26 03:59:51.703075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:36.975 [2024-07-26 03:59:51.703093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:31:36.975 [2024-07-26 03:59:51.703104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.975 [2024-07-26 03:59:51.719765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.975 [2024-07-26 03:59:51.719867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:36.975 [2024-07-26 03:59:51.719902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.608 ms 00:31:36.975 [2024-07-26 03:59:51.719914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.975 [2024-07-26 03:59:51.720368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.975 [2024-07-26 03:59:51.720401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:36.975 [2024-07-26 03:59:51.720416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:31:36.975 [2024-07-26 03:59:51.720435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.975 [2024-07-26 03:59:51.757886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:36.975 [2024-07-26 03:59:51.757955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:36.975 [2024-07-26 03:59:51.757974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:36.975 [2024-07-26 03:59:51.757986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.975 [2024-07-26 03:59:51.758069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:36.975 [2024-07-26 03:59:51.758085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:36.975 [2024-07-26 03:59:51.758098] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:36.975 [2024-07-26 03:59:51.758115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.975 [2024-07-26 03:59:51.758208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:36.975 [2024-07-26 03:59:51.758227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:36.975 [2024-07-26 03:59:51.758251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:36.975 [2024-07-26 03:59:51.758263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.975 [2024-07-26 03:59:51.758285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:36.976 [2024-07-26 03:59:51.758299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:36.976 [2024-07-26 03:59:51.758312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:36.976 [2024-07-26 03:59:51.758323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.976 [2024-07-26 03:59:51.857694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:36.976 [2024-07-26 03:59:51.857766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:36.976 [2024-07-26 03:59:51.857786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:36.976 [2024-07-26 03:59:51.857799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.942927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:37.234 [2024-07-26 03:59:51.942995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:37.234 [2024-07-26 03:59:51.943015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:37.234 [2024-07-26 03:59:51.943046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.943159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:37.234 [2024-07-26 03:59:51.943178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:37.234 [2024-07-26 03:59:51.943190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:37.234 [2024-07-26 03:59:51.943202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.943250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:37.234 [2024-07-26 03:59:51.943266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:37.234 [2024-07-26 03:59:51.943285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:37.234 [2024-07-26 03:59:51.943296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.943419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:37.234 [2024-07-26 03:59:51.943440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:37.234 [2024-07-26 03:59:51.943453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:37.234 [2024-07-26 03:59:51.943464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.943512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:37.234 [2024-07-26 03:59:51.943532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:31:37.234 [2024-07-26 03:59:51.943544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:37.234 [2024-07-26 03:59:51.943556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.943606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:37.234 [2024-07-26 03:59:51.943622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:37.234 [2024-07-26 03:59:51.943634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:37.234 [2024-07-26 03:59:51.943646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.943698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:37.234 [2024-07-26 03:59:51.943715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:37.234 [2024-07-26 03:59:51.943727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:37.234 [2024-07-26 03:59:51.943738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.234 [2024-07-26 03:59:51.943912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.773 ms, result 0 00:31:38.628 00:31:38.628 00:31:38.628 03:59:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:40.529 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:40.529 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:40.529 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:40.529 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:40.529 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83868 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83868 ']' 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83868 00:31:40.787 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83868) - No such process 00:31:40.787 Process with pid 83868 is not found 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83868 is not found' 00:31:40.787 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:41.045 Remove shared memory files 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 
00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:41.045 ************************************ 00:31:41.045 END TEST ftl_dirty_shutdown 00:31:41.045 ************************************ 00:31:41.045 00:31:41.045 real 3m40.994s 00:31:41.045 user 4m13.694s 00:31:41.045 sys 0m37.480s 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:41.045 03:59:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:41.303 03:59:55 ftl -- common/autotest_common.sh@1142 -- # return 0 00:31:41.303 03:59:55 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:41.303 03:59:55 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:31:41.303 03:59:55 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.303 03:59:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:41.303 ************************************ 00:31:41.303 START TEST ftl_upgrade_shutdown 00:31:41.303 ************************************ 00:31:41.303 03:59:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:41.303 * Looking for test storage... 00:31:41.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:41.303 
03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86149 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86149 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86149 ']' 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:41.303 03:59:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:41.303 [2024-07-26 03:59:56.167502] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
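At this point the xtrace above has exported the ftl_upgrade_shutdown parameters and launched the SPDK target. Condensed into one illustrative shell sketch (values are copied from the log; the rpc_get_methods poll merely stands in for the suite's waitforlisten helper and is an assumption, not a copy of upgrade_shutdown.sh or common.sh):

# Parameters as exported by the test above.
export FTL_BDEV=ftl
export FTL_BASE=0000:00:11.0          # base (data) NVMe device
export FTL_BASE_SIZE=20480            # MiB, matches "Base device capacity: 20480.00 MiB" later in the log
export FTL_CACHE=0000:00:10.0         # NV cache NVMe device
export FTL_CACHE_SIZE=5120            # MiB, matches "NV cache device capacity: 5120.00 MiB"
export FTL_L2P_DRAM_LIMIT=2           # forwarded to bdev_ftl_create --l2p_dram_limit

# Start the SPDK target pinned to core 0 and wait for its RPC socket
# (this run got pid 86149 and listens on /var/tmp/spdk.sock).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
spdk_tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done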
00:31:41.303 [2024-07-26 03:59:56.167682] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86149 ] 00:31:41.561 [2024-07-26 03:59:56.342317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.819 [2024-07-26 03:59:56.582808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:42.753 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:31:43.012 03:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:43.270 03:59:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:43.270 { 00:31:43.270 "name": "basen1", 00:31:43.270 "aliases": [ 00:31:43.270 "e92fe4b3-ff45-4c7d-9d98-704dbb6955c9" 00:31:43.270 ], 00:31:43.270 "product_name": "NVMe disk", 00:31:43.270 "block_size": 4096, 00:31:43.270 "num_blocks": 1310720, 00:31:43.271 "uuid": "e92fe4b3-ff45-4c7d-9d98-704dbb6955c9", 00:31:43.271 "assigned_rate_limits": { 00:31:43.271 "rw_ios_per_sec": 0, 00:31:43.271 "rw_mbytes_per_sec": 0, 00:31:43.271 "r_mbytes_per_sec": 0, 00:31:43.271 "w_mbytes_per_sec": 0 00:31:43.271 }, 00:31:43.271 "claimed": true, 00:31:43.271 "claim_type": "read_many_write_one", 00:31:43.271 "zoned": false, 00:31:43.271 "supported_io_types": { 00:31:43.271 "read": true, 00:31:43.271 "write": true, 00:31:43.271 "unmap": true, 00:31:43.271 "flush": true, 00:31:43.271 "reset": true, 00:31:43.271 "nvme_admin": true, 00:31:43.271 "nvme_io": true, 00:31:43.271 "nvme_io_md": false, 00:31:43.271 "write_zeroes": true, 00:31:43.271 "zcopy": false, 00:31:43.271 "get_zone_info": false, 00:31:43.271 "zone_management": false, 00:31:43.271 "zone_append": false, 00:31:43.271 "compare": true, 00:31:43.271 "compare_and_write": false, 00:31:43.271 "abort": true, 00:31:43.271 "seek_hole": false, 00:31:43.271 "seek_data": false, 00:31:43.271 "copy": true, 00:31:43.271 "nvme_iov_md": false 00:31:43.271 }, 00:31:43.271 "driver_specific": { 00:31:43.271 "nvme": [ 00:31:43.271 { 00:31:43.271 "pci_address": "0000:00:11.0", 00:31:43.271 "trid": { 00:31:43.271 "trtype": "PCIe", 00:31:43.271 "traddr": "0000:00:11.0" 00:31:43.271 }, 00:31:43.271 "ctrlr_data": { 00:31:43.271 "cntlid": 0, 00:31:43.271 "vendor_id": "0x1b36", 00:31:43.271 "model_number": "QEMU NVMe Ctrl", 00:31:43.271 "serial_number": "12341", 00:31:43.271 "firmware_revision": "8.0.0", 00:31:43.271 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:43.271 "oacs": { 00:31:43.271 "security": 0, 00:31:43.271 "format": 1, 00:31:43.271 "firmware": 0, 00:31:43.271 "ns_manage": 1 00:31:43.271 }, 00:31:43.271 "multi_ctrlr": false, 00:31:43.271 "ana_reporting": false 00:31:43.271 }, 00:31:43.271 "vs": { 00:31:43.271 "nvme_version": "1.4" 00:31:43.271 }, 00:31:43.271 "ns_data": { 00:31:43.271 "id": 1, 00:31:43.271 "can_share": false 00:31:43.271 } 00:31:43.271 } 00:31:43.271 ], 00:31:43.271 "mp_policy": "active_passive" 00:31:43.271 } 00:31:43.271 } 00:31:43.271 ]' 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
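The block above sizes the freshly attached base namespace: bdev_get_bdevs output is filtered through jq for block_size and num_blocks, and 4096 B x 1310720 blocks works out to 5120 MiB. A minimal sketch of that computation follows (the helper name and arithmetic layout are illustrative, not copied from autotest_common.sh; the numbers in the comments come from the log):

get_bdev_size_mib() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 for basen1
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720 for basen1
    echo $(( bs * nb / 1024 / 1024 ))              # 4096 * 1310720 bytes = 5120 MiB
}

base_size=$(get_bdev_size_mib basen1)   # 5120
# The requested base size is 20480 MiB, so the "[[ 20480 -le 5120 ]]" check above fails;
# the script then clears any old lvstores and, further down in the log, creates a
# thin-provisioned 20480 MiB lvol (basen1p0) on basen1 to serve as the FTL base device.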
00:31:43.271 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:43.861 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=505f29ab-a1ee-42e5-83a1-ec9899d5a7a3 00:31:43.861 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:43.861 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 505f29ab-a1ee-42e5-83a1-ec9899d5a7a3 00:31:44.118 03:59:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:44.376 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=757bbeb4-531b-40e2-ad61-ce390074bd54 00:31:44.376 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 757bbeb4-531b-40e2-ad61-ce390074bd54 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3180e654-af4d-4787-b7bb-fc71ba62fa68 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3180e654-af4d-4787-b7bb-fc71ba62fa68 ]] 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3180e654-af4d-4787-b7bb-fc71ba62fa68 5120 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3180e654-af4d-4787-b7bb-fc71ba62fa68 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3180e654-af4d-4787-b7bb-fc71ba62fa68 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=3180e654-af4d-4787-b7bb-fc71ba62fa68 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:44.634 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3180e654-af4d-4787-b7bb-fc71ba62fa68 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:44.893 { 00:31:44.893 "name": "3180e654-af4d-4787-b7bb-fc71ba62fa68", 00:31:44.893 "aliases": [ 00:31:44.893 "lvs/basen1p0" 00:31:44.893 ], 00:31:44.893 "product_name": "Logical Volume", 00:31:44.893 "block_size": 4096, 00:31:44.893 "num_blocks": 5242880, 00:31:44.893 "uuid": "3180e654-af4d-4787-b7bb-fc71ba62fa68", 00:31:44.893 "assigned_rate_limits": { 00:31:44.893 "rw_ios_per_sec": 0, 00:31:44.893 "rw_mbytes_per_sec": 0, 00:31:44.893 "r_mbytes_per_sec": 0, 00:31:44.893 "w_mbytes_per_sec": 0 00:31:44.893 }, 00:31:44.893 "claimed": false, 00:31:44.893 "zoned": false, 00:31:44.893 "supported_io_types": { 00:31:44.893 "read": true, 00:31:44.893 "write": true, 00:31:44.893 "unmap": true, 00:31:44.893 "flush": false, 00:31:44.893 "reset": true, 00:31:44.893 "nvme_admin": false, 00:31:44.893 "nvme_io": false, 00:31:44.893 "nvme_io_md": false, 00:31:44.893 "write_zeroes": true, 00:31:44.893 "zcopy": false, 
00:31:44.893 "get_zone_info": false, 00:31:44.893 "zone_management": false, 00:31:44.893 "zone_append": false, 00:31:44.893 "compare": false, 00:31:44.893 "compare_and_write": false, 00:31:44.893 "abort": false, 00:31:44.893 "seek_hole": true, 00:31:44.893 "seek_data": true, 00:31:44.893 "copy": false, 00:31:44.893 "nvme_iov_md": false 00:31:44.893 }, 00:31:44.893 "driver_specific": { 00:31:44.893 "lvol": { 00:31:44.893 "lvol_store_uuid": "757bbeb4-531b-40e2-ad61-ce390074bd54", 00:31:44.893 "base_bdev": "basen1", 00:31:44.893 "thin_provision": true, 00:31:44.893 "num_allocated_clusters": 0, 00:31:44.893 "snapshot": false, 00:31:44.893 "clone": false, 00:31:44.893 "esnap_clone": false 00:31:44.893 } 00:31:44.893 } 00:31:44.893 } 00:31:44.893 ]' 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:44.893 03:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:45.151 04:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:45.151 04:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:45.151 04:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:45.718 04:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:45.718 04:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:45.718 04:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3180e654-af4d-4787-b7bb-fc71ba62fa68 -c cachen1p0 --l2p_dram_limit 2 00:31:45.718 [2024-07-26 04:00:00.533765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.533863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:45.718 [2024-07-26 04:00:00.533888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:45.718 [2024-07-26 04:00:00.533904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.533989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.534013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:45.718 [2024-07-26 04:00:00.534030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:45.718 [2024-07-26 04:00:00.534052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.534101] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:45.718 [2024-07-26 04:00:00.535133] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:45.718 [2024-07-26 04:00:00.535178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.535199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:45.718 [2024-07-26 04:00:00.535222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.088 ms 00:31:45.718 [2024-07-26 04:00:00.535235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.535398] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 217ba5db-72dd-41a3-b600-c765b1c16a47 00:31:45.718 [2024-07-26 04:00:00.536525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.536569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:45.718 [2024-07-26 04:00:00.536591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:45.718 [2024-07-26 04:00:00.536605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.541394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.541454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:45.718 [2024-07-26 04:00:00.541475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.699 ms 00:31:45.718 [2024-07-26 04:00:00.541488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.541565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.541584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:45.718 [2024-07-26 04:00:00.541603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:31:45.718 [2024-07-26 04:00:00.541622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.541745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.541770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:45.718 [2024-07-26 04:00:00.541813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:45.718 [2024-07-26 04:00:00.541854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.541897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:45.718 [2024-07-26 04:00:00.546490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.546536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:45.718 [2024-07-26 04:00:00.546554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.608 ms 00:31:45.718 [2024-07-26 04:00:00.546568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.546610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.546637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:45.718 [2024-07-26 04:00:00.546663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:45.718 [2024-07-26 04:00:00.546682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 
04:00:00.546755] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:45.718 [2024-07-26 04:00:00.546955] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:45.718 [2024-07-26 04:00:00.546988] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:45.718 [2024-07-26 04:00:00.547017] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:31:45.718 [2024-07-26 04:00:00.547060] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:45.718 [2024-07-26 04:00:00.547108] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:45.718 [2024-07-26 04:00:00.547142] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:45.718 [2024-07-26 04:00:00.547166] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:45.718 [2024-07-26 04:00:00.547178] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:45.718 [2024-07-26 04:00:00.547191] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:45.718 [2024-07-26 04:00:00.547205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.547219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:45.718 [2024-07-26 04:00:00.547233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.454 ms 00:31:45.718 [2024-07-26 04:00:00.547246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.547346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.718 [2024-07-26 04:00:00.547379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:45.718 [2024-07-26 04:00:00.547404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:45.718 [2024-07-26 04:00:00.547435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.718 [2024-07-26 04:00:00.547559] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:45.718 [2024-07-26 04:00:00.547592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:45.718 [2024-07-26 04:00:00.547608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:45.718 [2024-07-26 04:00:00.547633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.718 [2024-07-26 04:00:00.547657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:45.718 [2024-07-26 04:00:00.547677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:45.718 [2024-07-26 04:00:00.547704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:45.718 [2024-07-26 04:00:00.547728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:45.718 [2024-07-26 04:00:00.547752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:45.718 [2024-07-26 04:00:00.547772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.718 [2024-07-26 04:00:00.547784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:45.718 [2024-07-26 04:00:00.547800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:45.718 
[2024-07-26 04:00:00.547812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.718 [2024-07-26 04:00:00.547844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:45.718 [2024-07-26 04:00:00.547857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:45.718 [2024-07-26 04:00:00.547870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.718 [2024-07-26 04:00:00.547881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:45.718 [2024-07-26 04:00:00.547898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:45.718 [2024-07-26 04:00:00.547917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.718 [2024-07-26 04:00:00.547943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:45.718 [2024-07-26 04:00:00.547964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:45.718 [2024-07-26 04:00:00.547985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.718 [2024-07-26 04:00:00.547997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:45.718 [2024-07-26 04:00:00.548010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:45.718 [2024-07-26 04:00:00.548022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.718 [2024-07-26 04:00:00.548035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:45.718 [2024-07-26 04:00:00.548046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:45.718 [2024-07-26 04:00:00.548062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.718 [2024-07-26 04:00:00.548083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:45.719 [2024-07-26 04:00:00.548108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:45.719 [2024-07-26 04:00:00.548125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.719 [2024-07-26 04:00:00.548139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:45.719 [2024-07-26 04:00:00.548155] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:45.719 [2024-07-26 04:00:00.548182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.719 [2024-07-26 04:00:00.548198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:45.719 [2024-07-26 04:00:00.548212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:45.719 [2024-07-26 04:00:00.548227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.719 [2024-07-26 04:00:00.548253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:45.719 [2024-07-26 04:00:00.548274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:45.719 [2024-07-26 04:00:00.548289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.719 [2024-07-26 04:00:00.548301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:45.719 [2024-07-26 04:00:00.548314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:45.719 [2024-07-26 04:00:00.548325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.719 [2024-07-26 04:00:00.548338] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:31:45.719 [2024-07-26 04:00:00.548350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:45.719 [2024-07-26 04:00:00.548373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:45.719 [2024-07-26 04:00:00.548396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.719 [2024-07-26 04:00:00.548420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:45.719 [2024-07-26 04:00:00.548433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:45.719 [2024-07-26 04:00:00.548449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:45.719 [2024-07-26 04:00:00.548461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:45.719 [2024-07-26 04:00:00.548474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:45.719 [2024-07-26 04:00:00.548485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:45.719 [2024-07-26 04:00:00.548502] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:45.719 [2024-07-26 04:00:00.548520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:45.719 [2024-07-26 04:00:00.548558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:45.719 [2024-07-26 04:00:00.548640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:45.719 [2024-07-26 04:00:00.548655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:45.719 [2024-07-26 04:00:00.548671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:45.719 [2024-07-26 04:00:00.548684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 
blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:45.719 [2024-07-26 04:00:00.548847] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:45.719 [2024-07-26 04:00:00.548872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:45.719 [2024-07-26 04:00:00.548902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:45.719 [2024-07-26 04:00:00.548919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:45.719 [2024-07-26 04:00:00.548941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:45.719 [2024-07-26 04:00:00.548966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.719 [2024-07-26 04:00:00.548980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:45.719 [2024-07-26 04:00:00.548997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.470 ms 00:31:45.719 [2024-07-26 04:00:00.549010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.719 [2024-07-26 04:00:00.549077] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:31:45.719 [2024-07-26 04:00:00.549107] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:48.250 [2024-07-26 04:00:02.561567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.561646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:48.250 [2024-07-26 04:00:02.561674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2012.498 ms 00:31:48.250 [2024-07-26 04:00:02.561687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.594258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.594325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:48.250 [2024-07-26 04:00:02.594350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.243 ms 00:31:48.250 [2024-07-26 04:00:02.594364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.594509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.594531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:48.250 [2024-07-26 04:00:02.594551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:48.250 [2024-07-26 04:00:02.594564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.633755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.633833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:48.250 [2024-07-26 04:00:02.633860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.125 ms 00:31:48.250 [2024-07-26 04:00:02.633873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.633948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.633965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:48.250 [2024-07-26 04:00:02.633987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:48.250 [2024-07-26 04:00:02.633999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.634461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.634499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:48.250 [2024-07-26 04:00:02.634518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:31:48.250 [2024-07-26 04:00:02.634531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.634598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.634621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:48.250 [2024-07-26 04:00:02.634646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:48.250 [2024-07-26 04:00:02.634668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.652260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.652332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:48.250 [2024-07-26 04:00:02.652358] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.557 ms 00:31:48.250 [2024-07-26 04:00:02.652371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.666092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:48.250 [2024-07-26 04:00:02.667121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.667164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:48.250 [2024-07-26 04:00:02.667186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.606 ms 00:31:48.250 [2024-07-26 04:00:02.667201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.701438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.701518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:48.250 [2024-07-26 04:00:02.701541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.174 ms 00:31:48.250 [2024-07-26 04:00:02.701556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.701705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.701739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:48.250 [2024-07-26 04:00:02.701763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:31:48.250 [2024-07-26 04:00:02.701897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.734462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.250 [2024-07-26 04:00:02.734553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:48.250 [2024-07-26 04:00:02.734577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.455 ms 00:31:48.250 [2024-07-26 04:00:02.734596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.250 [2024-07-26 04:00:02.766895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.251 [2024-07-26 04:00:02.766990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:48.251 [2024-07-26 04:00:02.767013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.216 ms 00:31:48.251 [2024-07-26 04:00:02.767029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.767868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.251 [2024-07-26 04:00:02.767910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:48.251 [2024-07-26 04:00:02.767930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.732 ms 00:31:48.251 [2024-07-26 04:00:02.767944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.862302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.251 [2024-07-26 04:00:02.862390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:48.251 [2024-07-26 04:00:02.862412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 94.259 ms 00:31:48.251 [2024-07-26 04:00:02.862431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.896145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:48.251 [2024-07-26 04:00:02.896221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:48.251 [2024-07-26 04:00:02.896245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.619 ms 00:31:48.251 [2024-07-26 04:00:02.896260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.928957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.251 [2024-07-26 04:00:02.929043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:48.251 [2024-07-26 04:00:02.929077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.632 ms 00:31:48.251 [2024-07-26 04:00:02.929093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.963717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.251 [2024-07-26 04:00:02.963789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:48.251 [2024-07-26 04:00:02.963812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.544 ms 00:31:48.251 [2024-07-26 04:00:02.963851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.963970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.251 [2024-07-26 04:00:02.964008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:48.251 [2024-07-26 04:00:02.964033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:48.251 [2024-07-26 04:00:02.964067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.964221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.251 [2024-07-26 04:00:02.964263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:48.251 [2024-07-26 04:00:02.964279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:31:48.251 [2024-07-26 04:00:02.964294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.251 [2024-07-26 04:00:02.965453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2431.194 ms, result 0 00:31:48.251 { 00:31:48.251 "name": "ftl", 00:31:48.251 "uuid": "217ba5db-72dd-41a3-b600-c765b1c16a47" 00:31:48.251 } 00:31:48.251 04:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:48.509 [2024-07-26 04:00:03.252650] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.509 04:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:48.765 04:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:49.023 [2024-07-26 04:00:03.717218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:49.023 04:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:49.281 [2024-07-26 04:00:03.954942] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:49.281 04:00:03 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:49.539 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:49.539 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:49.540 Fill FTL, iteration 1 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86266 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86266 /var/tmp/spdk.tgt.sock 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86266 ']' 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:49.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:49.540 04:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:49.817 [2024-07-26 04:00:04.456537] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:31:49.817 [2024-07-26 04:00:04.457288] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86266 ] 00:31:49.817 [2024-07-26 04:00:04.624473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.076 [2024-07-26 04:00:04.810934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.025 04:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:51.025 04:00:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:31:51.025 04:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:51.025 ftln1 00:31:51.321 04:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:51.321 04:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86266 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86266 ']' 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86266 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86266 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:51.582 killing process with pid 86266 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86266' 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86266 00:31:51.582 04:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86266 00:31:53.496 04:00:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:53.496 04:00:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:53.496 [2024-07-26 04:00:08.360877] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:31:53.496 [2024-07-26 04:00:08.361064] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86319 ] 00:31:53.755 [2024-07-26 04:00:08.530121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.014 [2024-07-26 04:00:08.716149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.597  Copying: 212/1024 [MB] (212 MBps) Copying: 422/1024 [MB] (210 MBps) Copying: 634/1024 [MB] (212 MBps) Copying: 833/1024 [MB] (199 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:32:00.597 00:32:00.597 Calculate MD5 checksum, iteration 1 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:00.597 04:00:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:00.597 [2024-07-26 04:00:15.348448] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:32:00.597 [2024-07-26 04:00:15.348621] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86389 ] 00:32:00.856 [2024-07-26 04:00:15.520161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.856 [2024-07-26 04:00:15.742796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.949  Copying: 383/1024 [MB] (383 MBps) Copying: 831/1024 [MB] (448 MBps) Copying: 1024/1024 [MB] (average 421 MBps) 00:32:04.949 00:32:04.949 04:00:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:04.949 04:00:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:06.848 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:07.106 Fill FTL, iteration 2 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=63a1da9796ea641762635676362212e5 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:07.106 04:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:07.106 [2024-07-26 04:00:21.838318] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:32:07.106 [2024-07-26 04:00:21.838471] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86456 ] 00:32:07.106 [2024-07-26 04:00:22.001220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.363 [2024-07-26 04:00:22.187479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.628  Copying: 188/1024 [MB] (188 MBps) Copying: 378/1024 [MB] (190 MBps) Copying: 542/1024 [MB] (164 MBps) Copying: 744/1024 [MB] (202 MBps) Copying: 943/1024 [MB] (199 MBps) Copying: 1024/1024 [MB] (average 189 MBps) 00:32:14.628 00:32:14.628 Calculate MD5 checksum, iteration 2 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:14.628 04:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:14.628 [2024-07-26 04:00:29.226210] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:32:14.628 [2024-07-26 04:00:29.226357] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86531 ] 00:32:14.628 [2024-07-26 04:00:29.391002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.887 [2024-07-26 04:00:29.581325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.012  Copying: 516/1024 [MB] (516 MBps) Copying: 961/1024 [MB] (445 MBps) Copying: 1024/1024 [MB] (average 480 MBps) 00:32:19.012 00:32:19.012 04:00:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:19.012 04:00:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:21.543 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:21.543 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8851e8c5c51382b764139cd64483e216 00:32:21.543 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:21.543 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:21.543 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:21.543 [2024-07-26 04:00:36.348717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.543 [2024-07-26 04:00:36.348785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:21.543 [2024-07-26 04:00:36.348808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:21.543 [2024-07-26 04:00:36.348845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.543 [2024-07-26 04:00:36.348888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.543 [2024-07-26 04:00:36.348906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:21.543 [2024-07-26 04:00:36.348918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:21.543 [2024-07-26 04:00:36.348930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.543 [2024-07-26 04:00:36.348972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.543 [2024-07-26 04:00:36.348986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:21.543 [2024-07-26 04:00:36.348999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:21.543 [2024-07-26 04:00:36.349022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.543 [2024-07-26 04:00:36.349104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.376 ms, result 0 00:32:21.543 true 00:32:21.543 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:21.801 { 00:32:21.801 "name": "ftl", 00:32:21.801 "properties": [ 00:32:21.801 { 00:32:21.801 "name": "superblock_version", 00:32:21.801 "value": 5, 00:32:21.801 "read-only": true 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "name": "base_device", 00:32:21.801 "bands": [ 00:32:21.801 { 00:32:21.801 "id": 0, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 
00:32:21.801 { 00:32:21.801 "id": 1, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 2, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 3, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 4, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 5, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 6, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 7, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 8, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 9, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 10, 00:32:21.801 "state": "FREE", 00:32:21.801 "validity": 0.0 00:32:21.801 }, 00:32:21.801 { 00:32:21.801 "id": 11, 00:32:21.802 "state": "FREE", 00:32:21.802 "validity": 0.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 12, 00:32:21.802 "state": "FREE", 00:32:21.802 "validity": 0.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 13, 00:32:21.802 "state": "FREE", 00:32:21.802 "validity": 0.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 14, 00:32:21.802 "state": "FREE", 00:32:21.802 "validity": 0.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 15, 00:32:21.802 "state": "FREE", 00:32:21.802 "validity": 0.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 16, 00:32:21.802 "state": "FREE", 00:32:21.802 "validity": 0.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 17, 00:32:21.802 "state": "FREE", 00:32:21.802 "validity": 0.0 00:32:21.802 } 00:32:21.802 ], 00:32:21.802 "read-only": true 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "name": "cache_device", 00:32:21.802 "type": "bdev", 00:32:21.802 "chunks": [ 00:32:21.802 { 00:32:21.802 "id": 0, 00:32:21.802 "state": "INACTIVE", 00:32:21.802 "utilization": 0.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 1, 00:32:21.802 "state": "CLOSED", 00:32:21.802 "utilization": 1.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 2, 00:32:21.802 "state": "CLOSED", 00:32:21.802 "utilization": 1.0 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 3, 00:32:21.802 "state": "OPEN", 00:32:21.802 "utilization": 0.001953125 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "id": 4, 00:32:21.802 "state": "OPEN", 00:32:21.802 "utilization": 0.0 00:32:21.802 } 00:32:21.802 ], 00:32:21.802 "read-only": true 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "name": "verbose_mode", 00:32:21.802 "value": true, 00:32:21.802 "unit": "", 00:32:21.802 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:21.802 }, 00:32:21.802 { 00:32:21.802 "name": "prep_upgrade_on_shutdown", 00:32:21.802 "value": false, 00:32:21.802 "unit": "", 00:32:21.802 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:21.802 } 00:32:21.802 ] 00:32:21.802 } 00:32:21.802 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:22.071 [2024-07-26 04:00:36.937476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.071 [2024-07-26 
04:00:36.937544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:22.071 [2024-07-26 04:00:36.937567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:22.071 [2024-07-26 04:00:36.937579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.071 [2024-07-26 04:00:36.937615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.071 [2024-07-26 04:00:36.937631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:22.071 [2024-07-26 04:00:36.937644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:22.071 [2024-07-26 04:00:36.937655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.071 [2024-07-26 04:00:36.937683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.071 [2024-07-26 04:00:36.937697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:22.071 [2024-07-26 04:00:36.937718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:22.071 [2024-07-26 04:00:36.937729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.071 [2024-07-26 04:00:36.937804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.319 ms, result 0 00:32:22.071 true 00:32:22.071 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:22.071 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:22.071 04:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:22.638 04:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:22.638 04:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:22.638 04:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:22.638 [2024-07-26 04:00:37.478125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.638 [2024-07-26 04:00:37.478192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:22.638 [2024-07-26 04:00:37.478214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:22.638 [2024-07-26 04:00:37.478226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.638 [2024-07-26 04:00:37.478262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.638 [2024-07-26 04:00:37.478278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:22.638 [2024-07-26 04:00:37.478290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:22.638 [2024-07-26 04:00:37.478301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.638 [2024-07-26 04:00:37.478328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.638 [2024-07-26 04:00:37.478354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:22.638 [2024-07-26 04:00:37.478367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:22.638 [2024-07-26 04:00:37.478378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:32:22.638 [2024-07-26 04:00:37.478456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.317 ms, result 0 00:32:22.638 true 00:32:22.638 04:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:22.906 { 00:32:22.906 "name": "ftl", 00:32:22.906 "properties": [ 00:32:22.906 { 00:32:22.906 "name": "superblock_version", 00:32:22.906 "value": 5, 00:32:22.906 "read-only": true 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "name": "base_device", 00:32:22.906 "bands": [ 00:32:22.906 { 00:32:22.906 "id": 0, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 1, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 2, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 3, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 4, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 5, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 6, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 7, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 8, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 9, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 10, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 11, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 12, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 13, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 14, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 15, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 16, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 17, 00:32:22.906 "state": "FREE", 00:32:22.906 "validity": 0.0 00:32:22.906 } 00:32:22.906 ], 00:32:22.906 "read-only": true 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "name": "cache_device", 00:32:22.906 "type": "bdev", 00:32:22.906 "chunks": [ 00:32:22.906 { 00:32:22.906 "id": 0, 00:32:22.906 "state": "INACTIVE", 00:32:22.906 "utilization": 0.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 1, 00:32:22.906 "state": "CLOSED", 00:32:22.906 "utilization": 1.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 2, 00:32:22.906 "state": "CLOSED", 00:32:22.906 "utilization": 1.0 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 3, 00:32:22.906 "state": "OPEN", 00:32:22.906 "utilization": 0.001953125 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "id": 4, 00:32:22.906 "state": "OPEN", 00:32:22.906 "utilization": 0.0 00:32:22.906 } 00:32:22.906 ], 00:32:22.906 "read-only": true 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "name": "verbose_mode", 00:32:22.906 "value": 
true, 00:32:22.906 "unit": "", 00:32:22.906 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:22.906 }, 00:32:22.906 { 00:32:22.906 "name": "prep_upgrade_on_shutdown", 00:32:22.907 "value": true, 00:32:22.907 "unit": "", 00:32:22.907 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:22.907 } 00:32:22.907 ] 00:32:22.907 } 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86149 ]] 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86149 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86149 ']' 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86149 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:22.907 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86149 00:32:23.167 killing process with pid 86149 00:32:23.167 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:23.167 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:23.167 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86149' 00:32:23.167 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86149 00:32:23.167 04:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86149 00:32:24.101 [2024-07-26 04:00:38.805547] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:24.101 [2024-07-26 04:00:38.823363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.101 [2024-07-26 04:00:38.823433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:24.101 [2024-07-26 04:00:38.823455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:24.101 [2024-07-26 04:00:38.823468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.101 [2024-07-26 04:00:38.823500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:24.101 [2024-07-26 04:00:38.827019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.101 [2024-07-26 04:00:38.827175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:24.101 [2024-07-26 04:00:38.827312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.494 ms 00:32:24.101 [2024-07-26 04:00:38.827368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.505565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.505651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:34.069 [2024-07-26 04:00:47.505674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8678.184 ms 00:32:34.069 [2024-07-26 04:00:47.505687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.507036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:34.069 [2024-07-26 04:00:47.507068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:34.069 [2024-07-26 04:00:47.507095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.324 ms 00:32:34.069 [2024-07-26 04:00:47.507108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.508470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.508506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:34.069 [2024-07-26 04:00:47.508530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.195 ms 00:32:34.069 [2024-07-26 04:00:47.508542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.521355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.521414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:34.069 [2024-07-26 04:00:47.521431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.760 ms 00:32:34.069 [2024-07-26 04:00:47.521443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.529286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.529350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:34.069 [2024-07-26 04:00:47.529370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.795 ms 00:32:34.069 [2024-07-26 04:00:47.529382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.529527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.529550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:34.069 [2024-07-26 04:00:47.529564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:32:34.069 [2024-07-26 04:00:47.529591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.542388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.542442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:32:34.069 [2024-07-26 04:00:47.542460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.770 ms 00:32:34.069 [2024-07-26 04:00:47.542471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.556888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.556957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:32:34.069 [2024-07-26 04:00:47.556976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.366 ms 00:32:34.069 [2024-07-26 04:00:47.556988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.569693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.569757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:34.069 [2024-07-26 04:00:47.569776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.648 ms 00:32:34.069 [2024-07-26 04:00:47.569787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.582614] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:34.069 [2024-07-26 04:00:47.582698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:34.069 [2024-07-26 04:00:47.582719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.712 ms 00:32:34.069 [2024-07-26 04:00:47.582730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.069 [2024-07-26 04:00:47.582785] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:34.069 [2024-07-26 04:00:47.582813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:34.069 [2024-07-26 04:00:47.582842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:34.069 [2024-07-26 04:00:47.582856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:34.069 [2024-07-26 04:00:47.582870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.582987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.583026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.583038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.583050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.583062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:34.069 [2024-07-26 04:00:47.583073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:34.070 [2024-07-26 04:00:47.583102] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:34.070 [2024-07-26 04:00:47.583115] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 217ba5db-72dd-41a3-b600-c765b1c16a47 00:32:34.070 [2024-07-26 04:00:47.583127] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:34.070 [2024-07-26 04:00:47.583139] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:32:34.070 [2024-07-26 04:00:47.583157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:34.070 [2024-07-26 04:00:47.583169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:34.070 [2024-07-26 04:00:47.583180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:34.070 [2024-07-26 04:00:47.583191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:34.070 [2024-07-26 04:00:47.583203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:34.070 [2024-07-26 04:00:47.583213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:34.070 [2024-07-26 04:00:47.583224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:34.070 [2024-07-26 04:00:47.583237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.070 [2024-07-26 04:00:47.583249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:34.070 [2024-07-26 04:00:47.583263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.454 ms 00:32:34.070 [2024-07-26 04:00:47.583275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.600347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.070 [2024-07-26 04:00:47.600434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:34.070 [2024-07-26 04:00:47.600457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.040 ms 00:32:34.070 [2024-07-26 04:00:47.600468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.600997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.070 [2024-07-26 04:00:47.601018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:34.070 [2024-07-26 04:00:47.601032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.452 ms 00:32:34.070 [2024-07-26 04:00:47.601044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.652828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.652890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:34.070 [2024-07-26 04:00:47.652910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.652922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.652981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.652997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:34.070 [2024-07-26 04:00:47.653010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.653027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.653144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.653170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:34.070 [2024-07-26 04:00:47.653189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.653201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.653235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.653250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:34.070 [2024-07-26 04:00:47.653261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.653272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.752149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.752232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:34.070 [2024-07-26 04:00:47.752253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.752266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.836847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.836921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:34.070 [2024-07-26 04:00:47.836941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.836954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.837086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.837106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:34.070 [2024-07-26 04:00:47.837128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.837140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.837202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.837220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:34.070 [2024-07-26 04:00:47.837232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.837243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.837372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.837391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:34.070 [2024-07-26 04:00:47.837404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.837422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.837474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.837491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:34.070 [2024-07-26 04:00:47.837503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.837514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.837562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.837578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:34.070 [2024-07-26 04:00:47.837590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.837607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.837662] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.070 [2024-07-26 04:00:47.837679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:34.070 [2024-07-26 04:00:47.837691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.070 [2024-07-26 04:00:47.837702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.070 [2024-07-26 04:00:47.837865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9014.516 ms, result 0 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:35.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86750 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86750 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86750 ']' 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:35.993 04:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:35.993 [2024-07-26 04:00:50.768555] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:32:35.993 [2024-07-26 04:00:50.768736] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86750 ] 00:32:36.251 [2024-07-26 04:00:50.940231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.508 [2024-07-26 04:00:51.171880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.463 [2024-07-26 04:00:52.014939] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:37.463 [2024-07-26 04:00:52.015022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:37.464 [2024-07-26 04:00:52.162779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.162866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:37.464 [2024-07-26 04:00:52.162890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:37.464 [2024-07-26 04:00:52.162903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.162976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.162996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:37.464 [2024-07-26 04:00:52.163009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:32:37.464 [2024-07-26 04:00:52.163020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.163059] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:37.464 [2024-07-26 04:00:52.164031] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:37.464 [2024-07-26 04:00:52.164066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.164081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:37.464 [2024-07-26 04:00:52.164094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.019 ms 00:32:37.464 [2024-07-26 04:00:52.164111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.165191] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:37.464 [2024-07-26 04:00:52.181812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.181907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:37.464 [2024-07-26 04:00:52.181932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.621 ms 00:32:37.464 [2024-07-26 04:00:52.181945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.182055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.182076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:37.464 [2024-07-26 04:00:52.182089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:32:37.464 [2024-07-26 04:00:52.182101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.186548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 
04:00:52.186597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:37.464 [2024-07-26 04:00:52.186633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.317 ms 00:32:37.464 [2024-07-26 04:00:52.186645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.186738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.186759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:37.464 [2024-07-26 04:00:52.186776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:37.464 [2024-07-26 04:00:52.186788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.186888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.186909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:37.464 [2024-07-26 04:00:52.186923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:37.464 [2024-07-26 04:00:52.186934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.186973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:37.464 [2024-07-26 04:00:52.191340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.191383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:37.464 [2024-07-26 04:00:52.191400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.377 ms 00:32:37.464 [2024-07-26 04:00:52.191412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.191453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.191469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:37.464 [2024-07-26 04:00:52.191486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:37.464 [2024-07-26 04:00:52.191497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.191552] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:37.464 [2024-07-26 04:00:52.191584] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:37.464 [2024-07-26 04:00:52.191631] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:37.464 [2024-07-26 04:00:52.191653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:32:37.464 [2024-07-26 04:00:52.191761] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:37.464 [2024-07-26 04:00:52.191783] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:37.464 [2024-07-26 04:00:52.191798] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:32:37.464 [2024-07-26 04:00:52.191813] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:37.464 [2024-07-26 04:00:52.191863] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:37.464 [2024-07-26 04:00:52.191876] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:37.464 [2024-07-26 04:00:52.191888] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:37.464 [2024-07-26 04:00:52.191899] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:37.464 [2024-07-26 04:00:52.191910] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:37.464 [2024-07-26 04:00:52.191922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.191934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:37.464 [2024-07-26 04:00:52.191946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:32:37.464 [2024-07-26 04:00:52.191963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.192063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.464 [2024-07-26 04:00:52.192086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:37.464 [2024-07-26 04:00:52.192098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:32:37.464 [2024-07-26 04:00:52.192109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.464 [2024-07-26 04:00:52.192260] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:37.464 [2024-07-26 04:00:52.192287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:37.464 [2024-07-26 04:00:52.192301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:37.464 [2024-07-26 04:00:52.192314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:37.464 [2024-07-26 04:00:52.192343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:37.464 [2024-07-26 04:00:52.192363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:37.464 [2024-07-26 04:00:52.192374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:37.464 [2024-07-26 04:00:52.192384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:37.464 [2024-07-26 04:00:52.192405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:37.464 [2024-07-26 04:00:52.192415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:37.464 [2024-07-26 04:00:52.192435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:37.464 [2024-07-26 04:00:52.192444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:37.464 [2024-07-26 04:00:52.192465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:37.464 [2024-07-26 04:00:52.192476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192487] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:37.464 [2024-07-26 04:00:52.192497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:37.464 [2024-07-26 04:00:52.192507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:37.464 [2024-07-26 04:00:52.192517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:37.464 [2024-07-26 04:00:52.192527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:37.464 [2024-07-26 04:00:52.192537] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:37.464 [2024-07-26 04:00:52.192547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:37.464 [2024-07-26 04:00:52.192558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:37.464 [2024-07-26 04:00:52.192567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:37.464 [2024-07-26 04:00:52.192577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:37.464 [2024-07-26 04:00:52.192587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:37.464 [2024-07-26 04:00:52.192597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:37.464 [2024-07-26 04:00:52.192607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:37.464 [2024-07-26 04:00:52.192617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:37.464 [2024-07-26 04:00:52.192627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:37.464 [2024-07-26 04:00:52.192647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:37.464 [2024-07-26 04:00:52.192657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:37.464 [2024-07-26 04:00:52.192677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:37.464 [2024-07-26 04:00:52.192687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.465 [2024-07-26 04:00:52.192697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:37.465 [2024-07-26 04:00:52.192707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:37.465 [2024-07-26 04:00:52.192718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.465 [2024-07-26 04:00:52.192728] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:37.465 [2024-07-26 04:00:52.192739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:37.465 [2024-07-26 04:00:52.192750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:37.465 [2024-07-26 04:00:52.192760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:37.465 [2024-07-26 04:00:52.192772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:37.465 [2024-07-26 04:00:52.192782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:37.465 [2024-07-26 04:00:52.192792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:37.465 [2024-07-26 04:00:52.192804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:37.465 [2024-07-26 04:00:52.192849] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:37.465 [2024-07-26 04:00:52.192861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:37.465 [2024-07-26 04:00:52.192873] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:37.465 [2024-07-26 04:00:52.192887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.192900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:37.465 [2024-07-26 04:00:52.192911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.192922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.192933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:37.465 [2024-07-26 04:00:52.192944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:37.465 [2024-07-26 04:00:52.192955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:37.465 [2024-07-26 04:00:52.192967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:37.465 [2024-07-26 04:00:52.192977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.192989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.193000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.193011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.193022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.193033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.193044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:37.465 [2024-07-26 04:00:52.193055] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:37.465 [2024-07-26 04:00:52.193068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.193080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:37.465 [2024-07-26 04:00:52.193091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:37.465 [2024-07-26 04:00:52.193103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:37.465 [2024-07-26 04:00:52.193114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:37.465 [2024-07-26 04:00:52.193126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.465 [2024-07-26 04:00:52.193137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:37.465 [2024-07-26 04:00:52.193149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.933 ms 00:32:37.465 [2024-07-26 04:00:52.193165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.465 [2024-07-26 04:00:52.193227] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:37.465 [2024-07-26 04:00:52.193245] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:40.743 [2024-07-26 04:00:55.051070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.743 [2024-07-26 04:00:55.051155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:40.743 [2024-07-26 04:00:55.051180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2857.859 ms 00:32:40.743 [2024-07-26 04:00:55.051202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.743 [2024-07-26 04:00:55.083706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.743 [2024-07-26 04:00:55.083777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:40.743 [2024-07-26 04:00:55.083801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.211 ms 00:32:40.743 [2024-07-26 04:00:55.083833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.743 [2024-07-26 04:00:55.083988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.743 [2024-07-26 04:00:55.084010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:40.743 [2024-07-26 04:00:55.084024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:40.744 [2024-07-26 04:00:55.084035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.136762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.136869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:40.744 [2024-07-26 04:00:55.136900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.635 ms 00:32:40.744 [2024-07-26 04:00:55.136919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.137027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.137050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:40.744 [2024-07-26 04:00:55.137069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:40.744 [2024-07-26 04:00:55.137085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.137572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.137618] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:40.744 [2024-07-26 04:00:55.137642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.348 ms 00:32:40.744 [2024-07-26 04:00:55.137659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.137753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.137778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:40.744 [2024-07-26 04:00:55.137797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:32:40.744 [2024-07-26 04:00:55.137814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.162069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.162156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:40.744 [2024-07-26 04:00:55.162186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.181 ms 00:32:40.744 [2024-07-26 04:00:55.162205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.187746] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:40.744 [2024-07-26 04:00:55.187877] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:40.744 [2024-07-26 04:00:55.187909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.187926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:40.744 [2024-07-26 04:00:55.187946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.434 ms 00:32:40.744 [2024-07-26 04:00:55.187960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.212121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.212233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:40.744 [2024-07-26 04:00:55.212261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.044 ms 00:32:40.744 [2024-07-26 04:00:55.212277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.233523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.233656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:40.744 [2024-07-26 04:00:55.233690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.111 ms 00:32:40.744 [2024-07-26 04:00:55.233710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.254110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.254206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:40.744 [2024-07-26 04:00:55.254232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.239 ms 00:32:40.744 [2024-07-26 04:00:55.254246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.255395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.255441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:40.744 [2024-07-26 
04:00:55.255468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.886 ms 00:32:40.744 [2024-07-26 04:00:55.255483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.356895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.356990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:40.744 [2024-07-26 04:00:55.357018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 101.370 ms 00:32:40.744 [2024-07-26 04:00:55.357033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.372823] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:40.744 [2024-07-26 04:00:55.373932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.373981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:40.744 [2024-07-26 04:00:55.374015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.782 ms 00:32:40.744 [2024-07-26 04:00:55.374030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.374197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.374223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:40.744 [2024-07-26 04:00:55.374240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:40.744 [2024-07-26 04:00:55.374254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.374378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.374409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:40.744 [2024-07-26 04:00:55.374425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:32:40.744 [2024-07-26 04:00:55.374444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.374489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.374508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:40.744 [2024-07-26 04:00:55.374522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:40.744 [2024-07-26 04:00:55.374536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.374600] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:40.744 [2024-07-26 04:00:55.374623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.374636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:40.744 [2024-07-26 04:00:55.374651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:32:40.744 [2024-07-26 04:00:55.374664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.413603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.413682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:40.744 [2024-07-26 04:00:55.413709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.893 ms 00:32:40.744 [2024-07-26 04:00:55.413724] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.413878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.744 [2024-07-26 04:00:55.413903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:40.744 [2024-07-26 04:00:55.413921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:32:40.744 [2024-07-26 04:00:55.413944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.744 [2024-07-26 04:00:55.415519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3252.095 ms, result 0 00:32:40.744 [2024-07-26 04:00:55.430189] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.744 [2024-07-26 04:00:55.446289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:40.744 [2024-07-26 04:00:55.456532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:40.744 04:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:40.744 04:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:32:40.744 04:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:40.744 04:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:40.744 04:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:41.003 [2024-07-26 04:00:55.761146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:41.003 [2024-07-26 04:00:55.761234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:41.003 [2024-07-26 04:00:55.761263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:41.003 [2024-07-26 04:00:55.761279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:41.003 [2024-07-26 04:00:55.761329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:41.003 [2024-07-26 04:00:55.761349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:41.003 [2024-07-26 04:00:55.761364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:41.003 [2024-07-26 04:00:55.761377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:41.003 [2024-07-26 04:00:55.761427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:41.003 [2024-07-26 04:00:55.761447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:41.003 [2024-07-26 04:00:55.761462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:41.003 [2024-07-26 04:00:55.761482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:41.003 [2024-07-26 04:00:55.761575] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.421 ms, result 0 00:32:41.003 true 00:32:41.003 04:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:41.261 { 00:32:41.261 "name": "ftl", 00:32:41.261 "properties": [ 00:32:41.261 { 00:32:41.261 "name": "superblock_version", 00:32:41.261 "value": 5, 00:32:41.261 "read-only": true 00:32:41.261 }, 
00:32:41.261 { 00:32:41.261 "name": "base_device", 00:32:41.261 "bands": [ 00:32:41.261 { 00:32:41.261 "id": 0, 00:32:41.261 "state": "CLOSED", 00:32:41.261 "validity": 1.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 1, 00:32:41.261 "state": "CLOSED", 00:32:41.261 "validity": 1.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 2, 00:32:41.261 "state": "CLOSED", 00:32:41.261 "validity": 0.007843137254901933 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 3, 00:32:41.261 "state": "FREE", 00:32:41.261 "validity": 0.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 4, 00:32:41.261 "state": "FREE", 00:32:41.261 "validity": 0.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 5, 00:32:41.261 "state": "FREE", 00:32:41.261 "validity": 0.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 6, 00:32:41.261 "state": "FREE", 00:32:41.261 "validity": 0.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 7, 00:32:41.261 "state": "FREE", 00:32:41.261 "validity": 0.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 8, 00:32:41.261 "state": "FREE", 00:32:41.261 "validity": 0.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 9, 00:32:41.261 "state": "FREE", 00:32:41.261 "validity": 0.0 00:32:41.261 }, 00:32:41.261 { 00:32:41.261 "id": 10, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 11, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 12, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 13, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 14, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 15, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 16, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 17, 00:32:41.262 "state": "FREE", 00:32:41.262 "validity": 0.0 00:32:41.262 } 00:32:41.262 ], 00:32:41.262 "read-only": true 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "name": "cache_device", 00:32:41.262 "type": "bdev", 00:32:41.262 "chunks": [ 00:32:41.262 { 00:32:41.262 "id": 0, 00:32:41.262 "state": "INACTIVE", 00:32:41.262 "utilization": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 1, 00:32:41.262 "state": "OPEN", 00:32:41.262 "utilization": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 2, 00:32:41.262 "state": "OPEN", 00:32:41.262 "utilization": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 3, 00:32:41.262 "state": "FREE", 00:32:41.262 "utilization": 0.0 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "id": 4, 00:32:41.262 "state": "FREE", 00:32:41.262 "utilization": 0.0 00:32:41.262 } 00:32:41.262 ], 00:32:41.262 "read-only": true 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "name": "verbose_mode", 00:32:41.262 "value": true, 00:32:41.262 "unit": "", 00:32:41.262 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:41.262 }, 00:32:41.262 { 00:32:41.262 "name": "prep_upgrade_on_shutdown", 00:32:41.262 "value": false, 00:32:41.262 "unit": "", 00:32:41.262 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:41.262 } 00:32:41.262 ] 00:32:41.262 } 00:32:41.262 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:41.262 04:00:56 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:41.262 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:41.828 Validate MD5 checksum, iteration 1 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:41.828 04:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:42.085 [2024-07-26 04:00:56.825495] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:32:42.085 [2024-07-26 04:00:56.825713] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86829 ] 00:32:42.343 [2024-07-26 04:00:57.009188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.343 [2024-07-26 04:00:57.234418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.078  Copying: 462/1024 [MB] (462 MBps) Copying: 943/1024 [MB] (481 MBps) Copying: 1024/1024 [MB] (average 469 MBps) 00:32:47.078 00:32:47.078 04:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:47.078 04:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:48.980 Validate MD5 checksum, iteration 2 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=63a1da9796ea641762635676362212e5 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 63a1da9796ea641762635676362212e5 != \6\3\a\1\d\a\9\7\9\6\e\a\6\4\1\7\6\2\6\3\5\6\7\6\3\6\2\2\1\2\e\5 ]] 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:48.980 04:01:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:49.238 [2024-07-26 04:01:03.923191] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:32:49.238 [2024-07-26 04:01:03.923336] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86902 ] 00:32:49.238 [2024-07-26 04:01:04.087714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.497 [2024-07-26 04:01:04.321872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.249  Copying: 443/1024 [MB] (443 MBps) Copying: 910/1024 [MB] (467 MBps) Copying: 1024/1024 [MB] (average 458 MBps) 00:32:54.249 00:32:54.249 04:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:54.249 04:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8851e8c5c51382b764139cd64483e216 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8851e8c5c51382b764139cd64483e216 != \8\8\5\1\e\8\c\5\c\5\1\3\8\2\b\7\6\4\1\3\9\c\d\6\4\4\8\3\e\2\1\6 ]] 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86750 ]] 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86750 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86976 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86976 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86976 ']' 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.151 04:01:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:56.409 [2024-07-26 04:01:11.079455] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 00:32:56.409 [2024-07-26 04:01:11.079837] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86976 ] 00:32:56.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86750 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:56.409 [2024-07-26 04:01:11.239554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.668 [2024-07-26 04:01:11.426537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.605 [2024-07-26 04:01:12.230995] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:57.605 [2024-07-26 04:01:12.231082] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:57.605 [2024-07-26 04:01:12.379444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.379522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:57.605 [2024-07-26 04:01:12.379544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:57.605 [2024-07-26 04:01:12.379557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.379651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.379670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:57.605 [2024-07-26 04:01:12.379684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:32:57.605 [2024-07-26 04:01:12.379696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.379734] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:57.605 [2024-07-26 04:01:12.380731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:57.605 [2024-07-26 04:01:12.380787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.380802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:57.605 [2024-07-26 04:01:12.380833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.064 ms 00:32:57.605 [2024-07-26 04:01:12.380854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.381365] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:57.605 [2024-07-26 04:01:12.402600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.402684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:57.605 [2024-07-26 04:01:12.402720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.233 ms 00:32:57.605 [2024-07-26 04:01:12.402732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.415326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:57.605 [2024-07-26 04:01:12.415405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:57.605 [2024-07-26 04:01:12.415426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:32:57.605 [2024-07-26 04:01:12.415438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.416061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.416099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:57.605 [2024-07-26 04:01:12.416115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.476 ms 00:32:57.605 [2024-07-26 04:01:12.416127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.416205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.416225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:57.605 [2024-07-26 04:01:12.416238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:57.605 [2024-07-26 04:01:12.416249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.416306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.416324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:57.605 [2024-07-26 04:01:12.416340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:57.605 [2024-07-26 04:01:12.416352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.416390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:57.605 [2024-07-26 04:01:12.420671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.420721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:57.605 [2024-07-26 04:01:12.420738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.291 ms 00:32:57.605 [2024-07-26 04:01:12.420751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.420795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.420841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:57.605 [2024-07-26 04:01:12.420857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:57.605 [2024-07-26 04:01:12.420869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.420930] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:57.605 [2024-07-26 04:01:12.420964] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:57.605 [2024-07-26 04:01:12.421010] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:57.605 [2024-07-26 04:01:12.421031] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:32:57.605 [2024-07-26 04:01:12.421137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:57.605 [2024-07-26 04:01:12.421152] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:57.605 [2024-07-26 04:01:12.421167] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:32:57.605 [2024-07-26 04:01:12.421182] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:57.605 [2024-07-26 04:01:12.421196] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:57.605 [2024-07-26 04:01:12.421208] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:57.605 [2024-07-26 04:01:12.421224] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:57.605 [2024-07-26 04:01:12.421236] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:57.605 [2024-07-26 04:01:12.421247] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:57.605 [2024-07-26 04:01:12.421259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.421274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:57.605 [2024-07-26 04:01:12.421286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.332 ms 00:32:57.605 [2024-07-26 04:01:12.421298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.421402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.605 [2024-07-26 04:01:12.421418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:57.605 [2024-07-26 04:01:12.421430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:32:57.605 [2024-07-26 04:01:12.421447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.605 [2024-07-26 04:01:12.421561] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:57.605 [2024-07-26 04:01:12.421580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:57.605 [2024-07-26 04:01:12.421593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:57.605 [2024-07-26 04:01:12.421605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.605 [2024-07-26 04:01:12.421616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:57.605 [2024-07-26 04:01:12.421627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:57.605 [2024-07-26 04:01:12.421638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:57.605 [2024-07-26 04:01:12.421648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:57.605 [2024-07-26 04:01:12.421660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:57.605 [2024-07-26 04:01:12.421670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.605 [2024-07-26 04:01:12.421680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:57.605 [2024-07-26 04:01:12.421691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:57.605 [2024-07-26 04:01:12.421701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.605 [2024-07-26 04:01:12.421711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:57.605 [2024-07-26 04:01:12.421722] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:57.605 [2024-07-26 04:01:12.421732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.605 [2024-07-26 04:01:12.421742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:57.605 [2024-07-26 04:01:12.421758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:57.605 [2024-07-26 04:01:12.421768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.605 [2024-07-26 04:01:12.421779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:57.605 [2024-07-26 04:01:12.421790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:57.605 [2024-07-26 04:01:12.421800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.605 [2024-07-26 04:01:12.421810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:57.605 [2024-07-26 04:01:12.422093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:57.606 [2024-07-26 04:01:12.422139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.606 [2024-07-26 04:01:12.422177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:57.606 [2024-07-26 04:01:12.422291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:57.606 [2024-07-26 04:01:12.422342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.606 [2024-07-26 04:01:12.422388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:57.606 [2024-07-26 04:01:12.422435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:57.606 [2024-07-26 04:01:12.422475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.606 [2024-07-26 04:01:12.422511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:57.606 [2024-07-26 04:01:12.422547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:57.606 [2024-07-26 04:01:12.422583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.606 [2024-07-26 04:01:12.422693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:57.606 [2024-07-26 04:01:12.422743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:57.606 [2024-07-26 04:01:12.422858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.606 [2024-07-26 04:01:12.422912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:57.606 [2024-07-26 04:01:12.422994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:57.606 [2024-07-26 04:01:12.423102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.606 [2024-07-26 04:01:12.423156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:57.606 [2024-07-26 04:01:12.423292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:57.606 [2024-07-26 04:01:12.423343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.606 [2024-07-26 04:01:12.423423] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:57.606 [2024-07-26 04:01:12.423528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:57.606 [2024-07-26 04:01:12.423550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:57.606 [2024-07-26 04:01:12.423563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:57.606 [2024-07-26 04:01:12.423574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:57.606 [2024-07-26 04:01:12.423586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:57.606 [2024-07-26 04:01:12.423613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:57.606 [2024-07-26 04:01:12.423625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:57.606 [2024-07-26 04:01:12.423635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:57.606 [2024-07-26 04:01:12.423646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:57.606 [2024-07-26 04:01:12.423659] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:57.606 [2024-07-26 04:01:12.423678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:57.606 [2024-07-26 04:01:12.423703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:57.606 [2024-07-26 04:01:12.423738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:57.606 [2024-07-26 04:01:12.423749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:57.606 [2024-07-26 04:01:12.423760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:57.606 [2024-07-26 04:01:12.423772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:57.606 [2024-07-26 04:01:12.423889] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:57.606 [2024-07-26 04:01:12.423902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:57.606 [2024-07-26 04:01:12.423925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:57.606 [2024-07-26 04:01:12.423938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:57.606 [2024-07-26 04:01:12.423949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:57.606 [2024-07-26 04:01:12.423963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.606 [2024-07-26 04:01:12.423975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:57.606 [2024-07-26 04:01:12.423988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.468 ms 00:32:57.606 [2024-07-26 04:01:12.424000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.606 [2024-07-26 04:01:12.456606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.606 [2024-07-26 04:01:12.456881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:57.606 [2024-07-26 04:01:12.457008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.515 ms 00:32:57.606 [2024-07-26 04:01:12.457122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.606 [2024-07-26 04:01:12.457242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.606 [2024-07-26 04:01:12.457366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:57.606 [2024-07-26 04:01:12.457530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:57.606 [2024-07-26 04:01:12.457588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.606 [2024-07-26 04:01:12.496495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.606 [2024-07-26 04:01:12.496752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:57.606 [2024-07-26 04:01:12.496895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.703 ms 00:32:57.606 [2024-07-26 04:01:12.497018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.606 [2024-07-26 04:01:12.497150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.606 [2024-07-26 04:01:12.497204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:57.606 [2024-07-26 04:01:12.497312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:57.606 [2024-07-26 04:01:12.497430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.606 [2024-07-26 04:01:12.497741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.606 [2024-07-26 04:01:12.497903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:57.606 [2024-07-26 04:01:12.498023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:32:57.606 [2024-07-26 04:01:12.498154] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:57.606 [2024-07-26 04:01:12.498263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.606 [2024-07-26 04:01:12.498328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:57.606 [2024-07-26 04:01:12.498471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:32:57.606 [2024-07-26 04:01:12.498594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.515947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.516196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:57.865 [2024-07-26 04:01:12.516335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.271 ms 00:32:57.865 [2024-07-26 04:01:12.516386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.516615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.516677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:57.865 [2024-07-26 04:01:12.516800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:57.865 [2024-07-26 04:01:12.516881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.549660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.549943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:57.865 [2024-07-26 04:01:12.550071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.699 ms 00:32:57.865 [2024-07-26 04:01:12.550123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.563518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.563684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:57.865 [2024-07-26 04:01:12.563861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.700 ms 00:32:57.865 [2024-07-26 04:01:12.563985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.641214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.641551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:57.865 [2024-07-26 04:01:12.641583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 77.077 ms 00:32:57.865 [2024-07-26 04:01:12.641597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.641854] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:57.865 [2024-07-26 04:01:12.641999] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:57.865 [2024-07-26 04:01:12.642151] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:57.865 [2024-07-26 04:01:12.642291] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:57.865 [2024-07-26 04:01:12.642319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.642334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:57.865 [2024-07-26 
04:01:12.642354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.630 ms 00:32:57.865 [2024-07-26 04:01:12.642366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.642482] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:57.865 [2024-07-26 04:01:12.642504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.642516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:57.865 [2024-07-26 04:01:12.642529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:32:57.865 [2024-07-26 04:01:12.642540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.664440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.664510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:57.865 [2024-07-26 04:01:12.664531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.865 ms 00:32:57.865 [2024-07-26 04:01:12.664544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.677734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.865 [2024-07-26 04:01:12.677802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:57.865 [2024-07-26 04:01:12.677839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:57.865 [2024-07-26 04:01:12.677860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.865 [2024-07-26 04:01:12.678111] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:58.430 [2024-07-26 04:01:13.227617] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:58.430 [2024-07-26 04:01:13.227906] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:58.995 [2024-07-26 04:01:13.745567] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:58.995 [2024-07-26 04:01:13.745718] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:58.995 [2024-07-26 04:01:13.745745] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:58.995 [2024-07-26 04:01:13.745766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.745782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:58.995 [2024-07-26 04:01:13.745801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1067.788 ms 00:32:58.995 [2024-07-26 04:01:13.745831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.745894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.745913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:58.995 [2024-07-26 04:01:13.745930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:58.995 [2024-07-26 04:01:13.745943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:32:58.995 [2024-07-26 04:01:13.761892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:58.995 [2024-07-26 04:01:13.762437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.762609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:58.995 [2024-07-26 04:01:13.762763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.457 ms 00:32:58.995 [2024-07-26 04:01:13.762841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.763949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.764145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:58.995 [2024-07-26 04:01:13.764286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.820 ms 00:32:58.995 [2024-07-26 04:01:13.764346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.767607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.767795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:58.995 [2024-07-26 04:01:13.767844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.126 ms 00:32:58.995 [2024-07-26 04:01:13.767861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.767953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.767976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:58.995 [2024-07-26 04:01:13.767991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:58.995 [2024-07-26 04:01:13.768005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.768165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.768190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:58.995 [2024-07-26 04:01:13.768206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:58.995 [2024-07-26 04:01:13.768219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.768258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.768276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:58.995 [2024-07-26 04:01:13.768291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:58.995 [2024-07-26 04:01:13.768304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.768352] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:58.995 [2024-07-26 04:01:13.768373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 [2024-07-26 04:01:13.768387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:58.995 [2024-07-26 04:01:13.768405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:58.995 [2024-07-26 04:01:13.768418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.768490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.995 
[2024-07-26 04:01:13.768508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:58.995 [2024-07-26 04:01:13.768523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:32:58.995 [2024-07-26 04:01:13.768536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.995 [2024-07-26 04:01:13.769912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1389.856 ms, result 0 00:32:58.995 [2024-07-26 04:01:13.784337] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.995 [2024-07-26 04:01:13.800311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:58.995 [2024-07-26 04:01:13.810441] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:58.995 Validate MD5 checksum, iteration 1 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:58.995 04:01:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:59.254 [2024-07-26 04:01:13.938948] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:32:59.254 [2024-07-26 04:01:13.939291] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87012 ] 00:32:59.254 [2024-07-26 04:01:14.104470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.512 [2024-07-26 04:01:14.316393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.632  Copying: 503/1024 [MB] (503 MBps) Copying: 958/1024 [MB] (455 MBps) Copying: 1024/1024 [MB] (average 474 MBps) 00:33:05.632 00:33:05.632 04:01:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:05.632 04:01:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:08.186 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:08.186 Validate MD5 checksum, iteration 2 00:33:08.186 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=63a1da9796ea641762635676362212e5 00:33:08.186 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 63a1da9796ea641762635676362212e5 != \6\3\a\1\d\a\9\7\9\6\e\a\6\4\1\7\6\2\6\3\5\6\7\6\3\6\2\2\1\2\e\5 ]] 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:08.187 04:01:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:08.187 [2024-07-26 04:01:22.815907] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:33:08.187 [2024-07-26 04:01:22.816148] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87096 ] 00:33:08.187 [2024-07-26 04:01:22.991763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.446 [2024-07-26 04:01:23.252152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.177  Copying: 464/1024 [MB] (464 MBps) Copying: 971/1024 [MB] (507 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:33:13.177 00:33:13.177 04:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:13.177 04:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8851e8c5c51382b764139cd64483e216 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8851e8c5c51382b764139cd64483e216 != \8\8\5\1\e\8\c\5\c\5\1\3\8\2\b\7\6\4\1\3\9\c\d\6\4\4\8\3\e\2\1\6 ]] 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86976 ]] 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86976 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86976 ']' 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86976 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86976 00:33:15.707 killing process with pid 86976 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86976' 00:33:15.707 04:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86976 00:33:15.707 04:01:30 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86976 00:33:16.671 [2024-07-26 04:01:31.355604] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:16.671 [2024-07-26 04:01:31.374315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.671 [2024-07-26 04:01:31.374386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:16.671 [2024-07-26 04:01:31.374410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:16.671 [2024-07-26 04:01:31.374422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.671 [2024-07-26 04:01:31.374457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:16.671 [2024-07-26 04:01:31.377768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.671 [2024-07-26 04:01:31.377833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:16.671 [2024-07-26 04:01:31.377852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.286 ms 00:33:16.671 [2024-07-26 04:01:31.377865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.671 [2024-07-26 04:01:31.378137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.671 [2024-07-26 04:01:31.378158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:16.671 [2024-07-26 04:01:31.378171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:33:16.671 [2024-07-26 04:01:31.378183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.671 [2024-07-26 04:01:31.379460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.379505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:16.672 [2024-07-26 04:01:31.379523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.253 ms 00:33:16.672 [2024-07-26 04:01:31.379543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.380790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.380839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:16.672 [2024-07-26 04:01:31.380857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.202 ms 00:33:16.672 [2024-07-26 04:01:31.380869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.393739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.393849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:16.672 [2024-07-26 04:01:31.393892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.769 ms 00:33:16.672 [2024-07-26 04:01:31.393906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.401101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.401192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:16.672 [2024-07-26 04:01:31.401213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.105 ms 00:33:16.672 [2024-07-26 04:01:31.401226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.401385] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.401423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:16.672 [2024-07-26 04:01:31.401439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:33:16.672 [2024-07-26 04:01:31.401456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.416019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.416102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:33:16.672 [2024-07-26 04:01:31.416124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.531 ms 00:33:16.672 [2024-07-26 04:01:31.416136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.429050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.429144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:33:16.672 [2024-07-26 04:01:31.429166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.847 ms 00:33:16.672 [2024-07-26 04:01:31.429178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.442455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.442531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:16.672 [2024-07-26 04:01:31.442554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.200 ms 00:33:16.672 [2024-07-26 04:01:31.442567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.455060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.455132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:16.672 [2024-07-26 04:01:31.455154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.395 ms 00:33:16.672 [2024-07-26 04:01:31.455166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.455213] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:16.672 [2024-07-26 04:01:31.455240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:16.672 [2024-07-26 04:01:31.455255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:16.672 [2024-07-26 04:01:31.455268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:16.672 [2024-07-26 04:01:31.455281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:16.672 [2024-07-26 04:01:31.455484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:16.672 [2024-07-26 04:01:31.455496] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 217ba5db-72dd-41a3-b600-c765b1c16a47 00:33:16.672 [2024-07-26 04:01:31.455508] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:16.672 [2024-07-26 04:01:31.455519] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:16.672 [2024-07-26 04:01:31.455530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:16.672 [2024-07-26 04:01:31.455542] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:16.672 [2024-07-26 04:01:31.455553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:16.672 [2024-07-26 04:01:31.455564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:16.672 [2024-07-26 04:01:31.455581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:16.672 [2024-07-26 04:01:31.455591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:16.672 [2024-07-26 04:01:31.455601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:16.672 [2024-07-26 04:01:31.455614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.455627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:16.672 [2024-07-26 04:01:31.455640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:33:16.672 [2024-07-26 04:01:31.455652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.473147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.473216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:16.672 [2024-07-26 04:01:31.473238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.441 ms 00:33:16.672 [2024-07-26 04:01:31.473261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.473721] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:16.672 [2024-07-26 04:01:31.473750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:16.672 [2024-07-26 04:01:31.473771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:33:16.672 [2024-07-26 04:01:31.473784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.530645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.672 [2024-07-26 04:01:31.530725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:16.672 [2024-07-26 04:01:31.530757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.672 [2024-07-26 04:01:31.530787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.530882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.672 [2024-07-26 04:01:31.530904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:16.672 [2024-07-26 04:01:31.530918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.672 [2024-07-26 04:01:31.530930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.531066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.672 [2024-07-26 04:01:31.531088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:16.672 [2024-07-26 04:01:31.531114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.672 [2024-07-26 04:01:31.531130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.672 [2024-07-26 04:01:31.531166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.672 [2024-07-26 04:01:31.531181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:16.672 [2024-07-26 04:01:31.531193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.672 [2024-07-26 04:01:31.531204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.630335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.630407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:16.933 [2024-07-26 04:01:31.630427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.630447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.714929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.715017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:16.933 [2024-07-26 04:01:31.715039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.715052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.715205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.715227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:16.933 [2024-07-26 04:01:31.715241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.715253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 
04:01:31.715317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.715347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:16.933 [2024-07-26 04:01:31.715361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.715372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.715509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.715537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:16.933 [2024-07-26 04:01:31.715550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.715561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.715613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.715638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:16.933 [2024-07-26 04:01:31.715651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.715662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.715708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.715725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:16.933 [2024-07-26 04:01:31.715737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.715748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.715857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:16.933 [2024-07-26 04:01:31.715885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:16.933 [2024-07-26 04:01:31.715898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:16.933 [2024-07-26 04:01:31.715910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.933 [2024-07-26 04:01:31.716062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 341.713 ms, result 0 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:18.310 Remove shared memory files 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86750 
00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:18.310 ************************************ 00:33:18.310 END TEST ftl_upgrade_shutdown 00:33:18.310 ************************************ 00:33:18.310 00:33:18.310 real 1m36.930s 00:33:18.310 user 2m19.803s 00:33:18.310 sys 0m23.148s 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:18.310 04:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:18.310 Process with pid 79628 is not found 00:33:18.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@1142 -- # return 0 00:33:18.310 04:01:32 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:33:18.310 04:01:32 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:18.310 04:01:32 ftl -- ftl/ftl.sh@14 -- # killprocess 79628 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@948 -- # '[' -z 79628 ']' 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@952 -- # kill -0 79628 00:33:18.310 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79628) - No such process 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79628 is not found' 00:33:18.310 04:01:32 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:18.310 04:01:32 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87232 00:33:18.310 04:01:32 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:18.310 04:01:32 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87232 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@829 -- # '[' -z 87232 ']' 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:18.310 04:01:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:18.310 [2024-07-26 04:01:33.069765] Starting SPDK v24.09-pre git sha1 764779691 / DPDK 24.03.0 initialization... 
00:33:18.310 [2024-07-26 04:01:33.070166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87232 ] 00:33:18.567 [2024-07-26 04:01:33.242922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.567 [2024-07-26 04:01:33.460633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.500 04:01:34 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:19.501 04:01:34 ftl -- common/autotest_common.sh@862 -- # return 0 00:33:19.501 04:01:34 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:19.759 nvme0n1 00:33:19.759 04:01:34 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:19.759 04:01:34 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:19.759 04:01:34 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:20.016 04:01:34 ftl -- ftl/common.sh@28 -- # stores=757bbeb4-531b-40e2-ad61-ce390074bd54 00:33:20.016 04:01:34 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:20.016 04:01:34 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 757bbeb4-531b-40e2-ad61-ce390074bd54 00:33:20.275 04:01:34 ftl -- ftl/ftl.sh@23 -- # killprocess 87232 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@948 -- # '[' -z 87232 ']' 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@952 -- # kill -0 87232 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@953 -- # uname 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87232 00:33:20.275 killing process with pid 87232 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87232' 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@967 -- # kill 87232 00:33:20.275 04:01:34 ftl -- common/autotest_common.sh@972 -- # wait 87232 00:33:22.201 04:01:37 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:22.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:22.459 Waiting for block devices as requested 00:33:22.459 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:22.718 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:22.718 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:22.977 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:28.244 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:28.244 04:01:42 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:28.244 04:01:42 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:28.244 Remove shared memory files 00:33:28.244 04:01:42 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:28.244 04:01:42 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:28.244 04:01:42 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:28.244 04:01:42 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:28.244 04:01:42 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:28.244 
************************************ 00:33:28.244 END TEST ftl 00:33:28.244 ************************************ 00:33:28.244 00:33:28.244 real 11m32.452s 00:33:28.244 user 14m32.221s 00:33:28.244 sys 1m32.458s 00:33:28.244 04:01:42 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:28.244 04:01:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:28.244 04:01:42 -- common/autotest_common.sh@1142 -- # return 0 00:33:28.244 04:01:42 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:28.244 04:01:42 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:28.244 04:01:42 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:28.244 04:01:42 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:28.244 04:01:42 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:28.244 04:01:42 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:28.244 04:01:42 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:28.244 04:01:42 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:28.244 04:01:42 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:28.244 04:01:42 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:28.244 04:01:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:28.244 04:01:42 -- common/autotest_common.sh@10 -- # set +x 00:33:28.244 04:01:42 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:28.244 04:01:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:28.244 04:01:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:28.244 04:01:42 -- common/autotest_common.sh@10 -- # set +x 00:33:29.622 INFO: APP EXITING 00:33:29.622 INFO: killing all VMs 00:33:29.622 INFO: killing vhost app 00:33:29.622 INFO: EXIT DONE 00:33:29.622 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:30.190 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:30.190 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:30.190 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:30.190 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:30.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:31.015 Cleaning 00:33:31.015 Removing: /var/run/dpdk/spdk0/config 00:33:31.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:31.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:31.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:31.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:31.015 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:31.015 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:31.015 Removing: /var/run/dpdk/spdk0 00:33:31.015 Removing: /var/run/dpdk/spdk_pid62297 00:33:31.015 Removing: /var/run/dpdk/spdk_pid62513 00:33:31.015 Removing: /var/run/dpdk/spdk_pid62729 00:33:31.015 Removing: /var/run/dpdk/spdk_pid62833 00:33:31.015 Removing: /var/run/dpdk/spdk_pid62884 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63020 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63038 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63225 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63324 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63418 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63533 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63633 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63678 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63720 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63783 00:33:31.015 Removing: /var/run/dpdk/spdk_pid63894 00:33:31.015 Removing: 
/var/run/dpdk/spdk_pid64372 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64442 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64516 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64532 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64690 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64707 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64842 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64858 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64928 00:33:31.016 Removing: /var/run/dpdk/spdk_pid64946 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65010 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65039 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65226 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65268 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65349 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65425 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65460 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65534 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65585 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65627 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65674 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65720 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65767 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65808 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65860 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65901 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65948 00:33:31.016 Removing: /var/run/dpdk/spdk_pid65994 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66041 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66087 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66134 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66175 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66227 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66268 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66323 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66373 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66419 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66467 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66549 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66665 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66832 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66933 00:33:31.016 Removing: /var/run/dpdk/spdk_pid66975 00:33:31.016 Removing: /var/run/dpdk/spdk_pid67446 00:33:31.016 Removing: /var/run/dpdk/spdk_pid67550 00:33:31.016 Removing: /var/run/dpdk/spdk_pid67670 00:33:31.016 Removing: /var/run/dpdk/spdk_pid67723 00:33:31.016 Removing: /var/run/dpdk/spdk_pid67753 00:33:31.275 Removing: /var/run/dpdk/spdk_pid67829 00:33:31.275 Removing: /var/run/dpdk/spdk_pid68463 00:33:31.275 Removing: /var/run/dpdk/spdk_pid68510 00:33:31.275 Removing: /var/run/dpdk/spdk_pid69035 00:33:31.275 Removing: /var/run/dpdk/spdk_pid69133 00:33:31.275 Removing: /var/run/dpdk/spdk_pid69248 00:33:31.275 Removing: /var/run/dpdk/spdk_pid69311 00:33:31.275 Removing: /var/run/dpdk/spdk_pid69338 00:33:31.275 Removing: /var/run/dpdk/spdk_pid69369 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71225 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71369 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71373 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71401 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71446 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71450 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71462 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71507 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71511 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71523 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71568 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71572 00:33:31.275 Removing: /var/run/dpdk/spdk_pid71584 
00:33:31.275 Removing: /var/run/dpdk/spdk_pid72931 00:33:31.275 Removing: /var/run/dpdk/spdk_pid73031 00:33:31.275 Removing: /var/run/dpdk/spdk_pid74425 00:33:31.275 Removing: /var/run/dpdk/spdk_pid75784 00:33:31.275 Removing: /var/run/dpdk/spdk_pid75905 00:33:31.275 Removing: /var/run/dpdk/spdk_pid76030 00:33:31.275 Removing: /var/run/dpdk/spdk_pid76146 00:33:31.275 Removing: /var/run/dpdk/spdk_pid76294 00:33:31.275 Removing: /var/run/dpdk/spdk_pid76372 00:33:31.275 Removing: /var/run/dpdk/spdk_pid76512 00:33:31.275 Removing: /var/run/dpdk/spdk_pid76882 00:33:31.275 Removing: /var/run/dpdk/spdk_pid76924 00:33:31.275 Removing: /var/run/dpdk/spdk_pid77392 00:33:31.275 Removing: /var/run/dpdk/spdk_pid77586 00:33:31.275 Removing: /var/run/dpdk/spdk_pid77686 00:33:31.275 Removing: /var/run/dpdk/spdk_pid77797 00:33:31.275 Removing: /var/run/dpdk/spdk_pid77856 00:33:31.275 Removing: /var/run/dpdk/spdk_pid77886 00:33:31.275 Removing: /var/run/dpdk/spdk_pid78191 00:33:31.275 Removing: /var/run/dpdk/spdk_pid78246 00:33:31.275 Removing: /var/run/dpdk/spdk_pid78325 00:33:31.275 Removing: /var/run/dpdk/spdk_pid78710 00:33:31.275 Removing: /var/run/dpdk/spdk_pid78851 00:33:31.275 Removing: /var/run/dpdk/spdk_pid79628 00:33:31.275 Removing: /var/run/dpdk/spdk_pid79769 00:33:31.275 Removing: /var/run/dpdk/spdk_pid79958 00:33:31.275 Removing: /var/run/dpdk/spdk_pid80061 00:33:31.275 Removing: /var/run/dpdk/spdk_pid80439 00:33:31.275 Removing: /var/run/dpdk/spdk_pid80716 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81065 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81264 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81402 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81467 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81606 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81637 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81702 00:33:31.275 Removing: /var/run/dpdk/spdk_pid81896 00:33:31.275 Removing: /var/run/dpdk/spdk_pid82140 00:33:31.275 Removing: /var/run/dpdk/spdk_pid82515 00:33:31.275 Removing: /var/run/dpdk/spdk_pid82958 00:33:31.275 Removing: /var/run/dpdk/spdk_pid83352 00:33:31.275 Removing: /var/run/dpdk/spdk_pid83868 00:33:31.275 Removing: /var/run/dpdk/spdk_pid84005 00:33:31.275 Removing: /var/run/dpdk/spdk_pid84115 00:33:31.275 Removing: /var/run/dpdk/spdk_pid84747 00:33:31.275 Removing: /var/run/dpdk/spdk_pid84828 00:33:31.275 Removing: /var/run/dpdk/spdk_pid85255 00:33:31.275 Removing: /var/run/dpdk/spdk_pid85655 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86149 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86266 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86319 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86389 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86456 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86531 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86750 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86829 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86902 00:33:31.275 Removing: /var/run/dpdk/spdk_pid86976 00:33:31.275 Removing: /var/run/dpdk/spdk_pid87012 00:33:31.275 Removing: /var/run/dpdk/spdk_pid87096 00:33:31.275 Removing: /var/run/dpdk/spdk_pid87232 00:33:31.275 Clean 00:33:31.534 04:01:46 -- common/autotest_common.sh@1451 -- # return 0 00:33:31.534 04:01:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:31.534 04:01:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:31.534 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:33:31.534 04:01:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:31.534 04:01:46 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:33:31.534 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:33:31.534 04:01:46 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:31.534 04:01:46 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:31.534 04:01:46 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:31.534 04:01:46 -- spdk/autotest.sh@391 -- # hash lcov 00:33:31.534 04:01:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:31.534 04:01:46 -- spdk/autotest.sh@393 -- # hostname 00:33:31.534 04:01:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:31.792 geninfo: WARNING: invalid characters removed from testname! 00:34:03.905 04:02:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:04.472 04:02:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:07.007 04:02:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:10.294 04:02:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:12.821 04:02:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:16.123 04:02:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:19.430 04:02:33 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:19.430 04:02:33 -- common/autobuild_common.sh@15 -- $ source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:19.430 04:02:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:19.430 04:02:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.430 04:02:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.430 04:02:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.430 04:02:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.430 04:02:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.430 04:02:33 -- paths/export.sh@5 -- $ export PATH 00:34:19.430 04:02:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.430 04:02:33 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:19.430 04:02:33 -- common/autobuild_common.sh@447 -- $ date +%s 00:34:19.430 04:02:33 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721966553.XXXXXX 00:34:19.430 04:02:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721966553.OnIqK1 00:34:19.430 04:02:33 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:34:19.430 04:02:33 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:34:19.430 04:02:33 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:34:19.430 04:02:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:19.430 04:02:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:19.430 04:02:33 -- common/autobuild_common.sh@463 -- $ get_config_params 00:34:19.430 04:02:33 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:19.430 04:02:33 -- common/autotest_common.sh@10 -- $ set +x 00:34:19.430 04:02:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk --with-xnvme' 00:34:19.430 04:02:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:34:19.430 04:02:33 -- pm/common@17 -- $ local monitor 00:34:19.430 04:02:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:19.430 04:02:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:19.430 04:02:33 -- pm/common@25 -- $ sleep 1 00:34:19.430 04:02:33 -- pm/common@21 -- $ date +%s 00:34:19.430 04:02:33 -- pm/common@21 -- $ date +%s 00:34:19.430 04:02:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721966553 00:34:19.430 04:02:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721966553 00:34:19.430 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721966553_collect-vmstat.pm.log 00:34:19.430 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721966553_collect-cpu-load.pm.log 00:34:19.998 04:02:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:19.998 04:02:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:34:19.998 04:02:34 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:19.998 04:02:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:19.998 04:02:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:19.998 04:02:34 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:19.998 04:02:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:19.998 04:02:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:19.998 04:02:34 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:19.998 04:02:34 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:19.998 04:02:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:19.998 04:02:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:19.998 04:02:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:19.998 04:02:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:19.998 04:02:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:34:19.998 04:02:34 -- pm/common@44 -- $ pid=88946 00:34:19.998 04:02:34 -- pm/common@50 -- $ kill -TERM 88946 00:34:19.998 04:02:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:19.998 04:02:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:34:19.998 04:02:34 -- pm/common@44 -- $ pid=88947 00:34:19.998 04:02:34 -- pm/common@50 -- $ kill -TERM 88947 00:34:19.998 + [[ -n 5202 ]] 00:34:19.998 + sudo kill 5202 00:34:20.008 [Pipeline] } 00:34:20.027 [Pipeline] // timeout 00:34:20.034 [Pipeline] } 00:34:20.072 [Pipeline] // stage 00:34:20.077 [Pipeline] } 00:34:20.093 [Pipeline] // catchError 00:34:20.102 [Pipeline] stage 00:34:20.104 [Pipeline] { (Stop VM) 00:34:20.119 [Pipeline] sh 00:34:20.400 + vagrant halt 00:34:24.590 ==> default: Halting domain... 00:34:29.897 [Pipeline] sh 00:34:30.176 + vagrant destroy -f 00:34:33.465 ==> default: Removing domain... 
00:34:34.047 [Pipeline] sh 00:34:34.328 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output 00:34:34.337 [Pipeline] } 00:34:34.354 [Pipeline] // stage 00:34:34.359 [Pipeline] } 00:34:34.376 [Pipeline] // dir 00:34:34.382 [Pipeline] } 00:34:34.399 [Pipeline] // wrap 00:34:34.406 [Pipeline] } 00:34:34.420 [Pipeline] // catchError 00:34:34.429 [Pipeline] stage 00:34:34.431 [Pipeline] { (Epilogue) 00:34:34.446 [Pipeline] sh 00:34:34.727 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:41.302 [Pipeline] catchError 00:34:41.304 [Pipeline] { 00:34:41.317 [Pipeline] sh 00:34:41.597 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:41.856 Artifacts sizes are good 00:34:41.864 [Pipeline] } 00:34:41.877 [Pipeline] // catchError 00:34:41.887 [Pipeline] archiveArtifacts 00:34:41.893 Archiving artifacts 00:34:42.035 [Pipeline] cleanWs 00:34:42.043 [WS-CLEANUP] Deleting project workspace... 00:34:42.043 [WS-CLEANUP] Deferred wipeout is used... 00:34:42.050 [WS-CLEANUP] done 00:34:42.052 [Pipeline] } 00:34:42.066 [Pipeline] // stage 00:34:42.069 [Pipeline] } 00:34:42.081 [Pipeline] // node 00:34:42.086 [Pipeline] End of Pipeline 00:34:42.115 Finished: SUCCESS