00:00:00.001 Started by upstream project "autotest-per-patch" build number 127179 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24320 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.185 Using shallow fetch with depth 1 00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.185 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.229 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.229 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/32/24332/3 # timeout=5 00:00:05.439 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.451 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.462 Checking out Revision 42e00731b22fe9a8063e4b475dece9d4b345521a (FETCH_HEAD) 00:00:05.462 > git config core.sparsecheckout # timeout=10 00:00:05.472 > git read-tree -mu HEAD # timeout=10 00:00:05.487 > git checkout -f 42e00731b22fe9a8063e4b475dece9d4b345521a # timeout=5 00:00:05.508 Commit message: "jjb/autotest: add SPDK_TEST_RAID flag for docker-autotest jobs" 00:00:05.508 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10 00:00:05.621 [Pipeline] Start of Pipeline 00:00:05.632 [Pipeline] library 00:00:05.633 Loading library shm_lib@master 00:00:05.633 Library shm_lib@master is cached. Copying from home. 00:00:05.647 [Pipeline] node 00:00:05.662 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:05.664 [Pipeline] { 00:00:05.674 [Pipeline] catchError 00:00:05.676 [Pipeline] { 00:00:05.692 [Pipeline] wrap 00:00:05.704 [Pipeline] { 00:00:05.714 [Pipeline] stage 00:00:05.716 [Pipeline] { (Prologue) 00:00:05.737 [Pipeline] echo 00:00:05.739 Node: VM-host-SM9 00:00:05.744 [Pipeline] cleanWs 00:00:05.753 [WS-CLEANUP] Deleting project workspace... 00:00:05.753 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.759 [WS-CLEANUP] done 00:00:05.933 [Pipeline] setCustomBuildProperty 00:00:06.001 [Pipeline] httpRequest 00:00:06.017 [Pipeline] echo 00:00:06.018 Sorcerer 10.211.164.101 is alive 00:00:06.025 [Pipeline] httpRequest 00:00:06.029 HttpMethod: GET 00:00:06.029 URL: http://10.211.164.101/packages/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:06.030 Sending request to url: http://10.211.164.101/packages/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:06.034 Response Code: HTTP/1.1 200 OK 00:00:06.034 Success: Status code 200 is in the accepted range: 200,404 00:00:06.034 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:07.913 [Pipeline] sh 00:00:08.187 + tar --no-same-owner -xf jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:08.201 [Pipeline] httpRequest 00:00:08.233 [Pipeline] echo 00:00:08.235 Sorcerer 10.211.164.101 is alive 00:00:08.243 [Pipeline] httpRequest 00:00:08.246 HttpMethod: GET 00:00:08.247 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:08.247 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:08.248 Response Code: HTTP/1.1 200 OK 00:00:08.249 Success: Status code 200 is in the accepted range: 200,404 00:00:08.249 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:26.168 [Pipeline] sh 00:00:26.447 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:28.989 [Pipeline] sh 00:00:29.267 + git -C spdk log --oneline -n5 00:00:29.267 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:00:29.267 fc2398dfa raid: clear base bdev configure_cb after executing 00:00:29.267 5558f3f50 raid: complete bdev_raid_create after sb is written 00:00:29.267 d005e023b raid: fix empty slot not updated in sb after resize 00:00:29.267 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:29.285 [Pipeline] writeFile 00:00:29.300 [Pipeline] sh 00:00:29.580 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:29.593 [Pipeline] sh 00:00:29.872 + cat autorun-spdk.conf 00:00:29.872 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.872 SPDK_TEST_NVME=1 00:00:29.872 SPDK_TEST_FTL=1 00:00:29.872 SPDK_TEST_ISAL=1 00:00:29.872 SPDK_RUN_ASAN=1 00:00:29.872 SPDK_RUN_UBSAN=1 00:00:29.872 SPDK_TEST_XNVME=1 00:00:29.872 SPDK_TEST_NVME_FDP=1 00:00:29.872 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:29.879 RUN_NIGHTLY=0 00:00:29.881 [Pipeline] } 00:00:29.891 [Pipeline] // stage 00:00:29.902 [Pipeline] stage 00:00:29.904 [Pipeline] { (Run VM) 00:00:29.914 [Pipeline] sh 00:00:30.187 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:30.187 + echo 'Start stage prepare_nvme.sh' 00:00:30.187 Start stage prepare_nvme.sh 00:00:30.187 + [[ -n 4 ]] 00:00:30.187 + disk_prefix=ex4 00:00:30.187 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:00:30.187 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:00:30.187 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:00:30.187 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.187 ++ SPDK_TEST_NVME=1 00:00:30.187 ++ SPDK_TEST_FTL=1 00:00:30.187 ++ SPDK_TEST_ISAL=1 00:00:30.187 ++ SPDK_RUN_ASAN=1 00:00:30.187 ++ SPDK_RUN_UBSAN=1 00:00:30.187 ++ SPDK_TEST_XNVME=1 00:00:30.187 ++ SPDK_TEST_NVME_FDP=1 00:00:30.187 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.187 ++ RUN_NIGHTLY=0 00:00:30.187 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:00:30.187 + nvme_files=() 00:00:30.187 + declare -A nvme_files 00:00:30.187 + backend_dir=/var/lib/libvirt/images/backends 00:00:30.187 + nvme_files['nvme.img']=5G 00:00:30.187 + nvme_files['nvme-cmb.img']=5G 00:00:30.187 + nvme_files['nvme-multi0.img']=4G 00:00:30.187 + nvme_files['nvme-multi1.img']=4G 00:00:30.187 + nvme_files['nvme-multi2.img']=4G 00:00:30.187 + nvme_files['nvme-openstack.img']=8G 00:00:30.187 + nvme_files['nvme-zns.img']=5G 00:00:30.187 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:30.187 + (( SPDK_TEST_FTL == 1 )) 00:00:30.187 + nvme_files["nvme-ftl.img"]=6G 00:00:30.187 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:30.187 + nvme_files["nvme-fdp.img"]=1G 00:00:30.187 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:30.187 + for nvme in "${!nvme_files[@]}" 00:00:30.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:30.187 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:30.188 + for nvme in "${!nvme_files[@]}" 00:00:30.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:00:30.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:30.188 + for nvme in "${!nvme_files[@]}" 00:00:30.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:30.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:30.188 + for nvme in "${!nvme_files[@]}" 00:00:30.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:30.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:30.445 + for nvme in "${!nvme_files[@]}" 00:00:30.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:30.445 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:30.445 + for nvme in "${!nvme_files[@]}" 00:00:30.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:30.445 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:30.445 + for nvme in "${!nvme_files[@]}" 00:00:30.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:30.445 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:30.445 + for nvme in "${!nvme_files[@]}" 00:00:30.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:00:30.445 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:30.445 + for nvme in "${!nvme_files[@]}" 00:00:30.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:30.703 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:30.703 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:30.703 + echo 'End stage prepare_nvme.sh' 00:00:30.703 End stage prepare_nvme.sh 00:00:30.715 [Pipeline] sh 00:00:30.992 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:30.992 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:00:30.992 00:00:30.992 
DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:00:30.992 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:00:30.992 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:00:30.992 HELP=0 00:00:30.992 DRY_RUN=0 00:00:30.992 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:00:30.992 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:30.992 NVME_AUTO_CREATE=0 00:00:30.992 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:00:30.992 NVME_CMB=,,,, 00:00:30.992 NVME_PMR=,,,, 00:00:30.992 NVME_ZNS=,,,, 00:00:30.992 NVME_MS=true,,,, 00:00:30.992 NVME_FDP=,,,on, 00:00:30.992 SPDK_VAGRANT_DISTRO=fedora38 00:00:30.992 SPDK_VAGRANT_VMCPU=10 00:00:30.992 SPDK_VAGRANT_VMRAM=12288 00:00:30.992 SPDK_VAGRANT_PROVIDER=libvirt 00:00:30.992 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:30.992 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:30.992 SPDK_OPENSTACK_NETWORK=0 00:00:30.992 VAGRANT_PACKAGE_BOX=0 00:00:30.993 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:30.993 FORCE_DISTRO=true 00:00:30.993 VAGRANT_BOX_VERSION= 00:00:30.993 EXTRA_VAGRANTFILES= 00:00:30.993 NIC_MODEL=e1000 00:00:30.993 00:00:30.993 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt' 00:00:30.993 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:00:34.302 Bringing machine 'default' up with 'libvirt' provider... 00:00:34.302 ==> default: Creating image (snapshot of base box volume). 00:00:34.561 ==> default: Creating domain with the following settings... 
00:00:34.561 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721912186_e39dd81718e15541d928 00:00:34.561 ==> default: -- Domain type: kvm 00:00:34.561 ==> default: -- Cpus: 10 00:00:34.561 ==> default: -- Feature: acpi 00:00:34.561 ==> default: -- Feature: apic 00:00:34.561 ==> default: -- Feature: pae 00:00:34.561 ==> default: -- Memory: 12288M 00:00:34.561 ==> default: -- Memory Backing: hugepages: 00:00:34.561 ==> default: -- Management MAC: 00:00:34.561 ==> default: -- Loader: 00:00:34.561 ==> default: -- Nvram: 00:00:34.561 ==> default: -- Base box: spdk/fedora38 00:00:34.561 ==> default: -- Storage pool: default 00:00:34.561 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721912186_e39dd81718e15541d928.img (20G) 00:00:34.561 ==> default: -- Volume Cache: default 00:00:34.561 ==> default: -- Kernel: 00:00:34.561 ==> default: -- Initrd: 00:00:34.561 ==> default: -- Graphics Type: vnc 00:00:34.561 ==> default: -- Graphics Port: -1 00:00:34.561 ==> default: -- Graphics IP: 127.0.0.1 00:00:34.561 ==> default: -- Graphics Password: Not defined 00:00:34.561 ==> default: -- Video Type: cirrus 00:00:34.561 ==> default: -- Video VRAM: 9216 00:00:34.561 ==> default: -- Sound Type: 00:00:34.561 ==> default: -- Keymap: en-us 00:00:34.561 ==> default: -- TPM Path: 00:00:34.561 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:34.561 ==> default: -- Command line args: 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:34.561 ==> default: -> value=-drive, 00:00:34.561 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:34.561 ==> default: -> value=-drive, 00:00:34.561 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:34.561 ==> default: -> value=-drive, 00:00:34.561 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.561 ==> default: -> value=-drive, 00:00:34.561 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.561 ==> default: -> value=-drive, 00:00:34.561 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:34.561 ==> default: -> value=-drive, 00:00:34.561 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:34.561 ==> default: -> value=-device, 00:00:34.561 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:34.561 ==> default: Creating shared folders metadata... 00:00:34.561 ==> default: Starting domain. 00:00:35.940 ==> default: Waiting for domain to get an IP address... 00:00:54.023 ==> default: Waiting for SSH to become available... 00:00:54.023 ==> default: Configuring and enabling network interfaces... 00:00:57.308 default: SSH address: 192.168.121.92:22 00:00:57.308 default: SSH username: vagrant 00:00:57.308 default: SSH auth method: private key 00:00:59.840 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:07.949 ==> default: Mounting SSHFS shared folder... 00:01:08.516 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:08.516 ==> default: Checking Mount.. 00:01:09.913 ==> default: Folder Successfully Mounted! 00:01:09.913 ==> default: Running provisioner: file... 00:01:10.848 default: ~/.gitconfig => .gitconfig 00:01:11.106 00:01:11.106 SUCCESS! 00:01:11.106 00:01:11.106 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:11.106 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.106 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:11.106 00:01:11.115 [Pipeline] } 00:01:11.133 [Pipeline] // stage 00:01:11.140 [Pipeline] dir 00:01:11.140 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt 00:01:11.141 [Pipeline] { 00:01:11.149 [Pipeline] catchError 00:01:11.150 [Pipeline] { 00:01:11.161 [Pipeline] sh 00:01:11.438 + vagrant ssh-config --host vagrant 00:01:11.438 + sed -ne /^Host/,$p 00:01:11.438 + tee ssh_conf 00:01:14.722 Host vagrant 00:01:14.722 HostName 192.168.121.92 00:01:14.722 User vagrant 00:01:14.722 Port 22 00:01:14.722 UserKnownHostsFile /dev/null 00:01:14.722 StrictHostKeyChecking no 00:01:14.722 PasswordAuthentication no 00:01:14.722 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:14.722 IdentitiesOnly yes 00:01:14.722 LogLevel FATAL 00:01:14.722 ForwardAgent yes 00:01:14.722 ForwardX11 yes 00:01:14.722 00:01:14.735 [Pipeline] withEnv 00:01:14.737 [Pipeline] { 00:01:14.753 [Pipeline] sh 00:01:15.030 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:15.030 source /etc/os-release 00:01:15.030 [[ -e /image.version ]] && img=$(< /image.version) 00:01:15.030 # Minimal, systemd-like check. 
00:01:15.030 if [[ -e /.dockerenv ]]; then 00:01:15.030 # Clear garbage from the node's name: 00:01:15.030 # agt-er_autotest_547-896 -> autotest_547-896 00:01:15.030 # $HOSTNAME is the actual container id 00:01:15.030 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:15.030 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:15.030 # We can assume this is a mount from a host where container is running, 00:01:15.030 # so fetch its hostname to easily identify the target swarm worker. 00:01:15.030 container="$(< /etc/hostname) ($agent)" 00:01:15.030 else 00:01:15.030 # Fallback 00:01:15.030 container=$agent 00:01:15.030 fi 00:01:15.030 fi 00:01:15.030 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:15.030 00:01:15.299 [Pipeline] } 00:01:15.319 [Pipeline] // withEnv 00:01:15.327 [Pipeline] setCustomBuildProperty 00:01:15.342 [Pipeline] stage 00:01:15.344 [Pipeline] { (Tests) 00:01:15.363 [Pipeline] sh 00:01:15.643 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:15.916 [Pipeline] sh 00:01:16.195 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:16.467 [Pipeline] timeout 00:01:16.468 Timeout set to expire in 40 min 00:01:16.469 [Pipeline] { 00:01:16.486 [Pipeline] sh 00:01:16.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:17.330 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:01:17.343 [Pipeline] sh 00:01:17.623 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:17.895 [Pipeline] sh 00:01:18.175 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:18.448 [Pipeline] sh 00:01:18.727 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:18.727 ++ readlink -f spdk_repo 00:01:18.727 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:18.727 + [[ -n /home/vagrant/spdk_repo ]] 00:01:18.727 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:18.727 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:18.727 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:18.728 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:18.728 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:18.728 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:18.728 + cd /home/vagrant/spdk_repo 00:01:18.728 + source /etc/os-release 00:01:18.728 ++ NAME='Fedora Linux' 00:01:18.728 ++ VERSION='38 (Cloud Edition)' 00:01:18.728 ++ ID=fedora 00:01:18.728 ++ VERSION_ID=38 00:01:18.728 ++ VERSION_CODENAME= 00:01:18.728 ++ PLATFORM_ID=platform:f38 00:01:18.728 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:18.728 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.728 ++ LOGO=fedora-logo-icon 00:01:18.728 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:18.728 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.728 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:18.728 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.728 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.728 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.728 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:18.728 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.728 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:18.728 ++ SUPPORT_END=2024-05-14 00:01:18.728 ++ VARIANT='Cloud Edition' 00:01:18.728 ++ VARIANT_ID=cloud 00:01:18.728 + uname -a 00:01:18.986 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:18.986 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:19.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:19.501 Hugepages 00:01:19.501 node hugesize free / total 00:01:19.501 node0 1048576kB 0 / 0 00:01:19.501 node0 2048kB 0 / 0 00:01:19.501 00:01:19.501 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.501 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:19.501 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:19.502 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:19.759 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:19.759 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:19.759 + rm -f /tmp/spdk-ld-path 00:01:19.759 + source autorun-spdk.conf 00:01:19.759 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.759 ++ SPDK_TEST_NVME=1 00:01:19.759 ++ SPDK_TEST_FTL=1 00:01:19.759 ++ SPDK_TEST_ISAL=1 00:01:19.759 ++ SPDK_RUN_ASAN=1 00:01:19.759 ++ SPDK_RUN_UBSAN=1 00:01:19.760 ++ SPDK_TEST_XNVME=1 00:01:19.760 ++ SPDK_TEST_NVME_FDP=1 00:01:19.760 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.760 ++ RUN_NIGHTLY=0 00:01:19.760 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.760 + [[ -n '' ]] 00:01:19.760 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:19.760 + for M in /var/spdk/build-*-manifest.txt 00:01:19.760 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.760 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.760 + for M in /var/spdk/build-*-manifest.txt 00:01:19.760 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.760 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.760 ++ uname 00:01:19.760 + [[ Linux == \L\i\n\u\x ]] 00:01:19.760 + sudo dmesg -T 00:01:19.760 + sudo dmesg --clear 00:01:19.760 + dmesg_pid=5192 00:01:19.760 + [[ Fedora Linux == FreeBSD ]] 00:01:19.760 + sudo dmesg -Tw 00:01:19.760 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.760 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.760 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.760 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.760 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.760 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.760 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.760 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:19.760 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.760 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.760 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.760 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.760 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.760 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.760 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:19.760 Test configuration: 00:01:19.760 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.760 SPDK_TEST_NVME=1 00:01:19.760 SPDK_TEST_FTL=1 00:01:19.760 SPDK_TEST_ISAL=1 00:01:19.760 SPDK_RUN_ASAN=1 00:01:19.760 SPDK_RUN_UBSAN=1 00:01:19.760 SPDK_TEST_XNVME=1 00:01:19.760 SPDK_TEST_NVME_FDP=1 00:01:19.760 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.760 RUN_NIGHTLY=0 12:57:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:19.760 12:57:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.760 12:57:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.760 12:57:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.760 12:57:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.760 12:57:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.760 12:57:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.019 12:57:11 -- paths/export.sh@5 -- $ export PATH 00:01:20.019 12:57:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.019 12:57:11 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:20.019 12:57:11 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:20.019 12:57:11 -- common/autobuild_common.sh@447 -- $ mktemp -dt 
spdk_1721912231.XXXXXX 00:01:20.019 12:57:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721912231.QuTEY4 00:01:20.019 12:57:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:20.019 12:57:11 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:20.019 12:57:11 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:20.019 12:57:11 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:20.019 12:57:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.019 12:57:11 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:20.019 12:57:11 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:20.019 12:57:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.019 12:57:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:20.019 12:57:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:20.019 12:57:11 -- pm/common@17 -- $ local monitor 00:01:20.019 12:57:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.019 12:57:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.019 12:57:11 -- pm/common@21 -- $ date +%s 00:01:20.019 12:57:11 -- pm/common@25 -- $ sleep 1 00:01:20.019 12:57:11 -- pm/common@21 -- $ date +%s 00:01:20.019 12:57:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721912231 00:01:20.019 12:57:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721912231 00:01:20.019 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721912231_collect-vmstat.pm.log 00:01:20.019 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721912231_collect-cpu-load.pm.log 00:01:20.955 12:57:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:20.955 12:57:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.955 12:57:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.955 12:57:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:20.955 12:57:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.955 Thu Jul 25 12:57:13 PM UTC 2024 00:01:20.955 12:57:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.955 v24.09-pre-321-g704257090 00:01:20.955 12:57:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:20.955 12:57:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:20.955 12:57:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:20.955 12:57:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:20.955 12:57:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.955 ************************************ 00:01:20.955 START TEST asan 00:01:20.955 ************************************ 00:01:20.955 using asan 00:01:20.955 12:57:13 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:20.955 00:01:20.955 
real 0m0.000s 00:01:20.955 user 0m0.000s 00:01:20.955 sys 0m0.000s 00:01:20.955 12:57:13 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:20.955 ************************************ 00:01:20.955 END TEST asan 00:01:20.955 ************************************ 00:01:20.955 12:57:13 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.955 12:57:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.955 12:57:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.955 12:57:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:20.955 12:57:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:20.955 12:57:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.955 ************************************ 00:01:20.955 START TEST ubsan 00:01:20.955 ************************************ 00:01:20.955 using ubsan 00:01:20.955 12:57:13 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:20.955 00:01:20.955 real 0m0.000s 00:01:20.955 user 0m0.000s 00:01:20.955 sys 0m0.000s 00:01:20.955 12:57:13 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:20.955 ************************************ 00:01:20.955 END TEST ubsan 00:01:20.955 ************************************ 00:01:20.955 12:57:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.955 12:57:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.955 12:57:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.955 12:57:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.955 12:57:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.955 12:57:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.955 12:57:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.955 12:57:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.955 12:57:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.955 12:57:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:21.214 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:21.214 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:21.779 Using 'verbs' RDMA provider 00:01:37.617 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:47.585 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:48.151 Creating mk/config.mk...done. 00:01:48.151 Creating mk/cc.flags.mk...done. 00:01:48.151 Type 'make' to build. 
00:01:48.151 12:57:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:48.151 12:57:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:48.151 12:57:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:48.151 12:57:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.151 ************************************ 00:01:48.151 START TEST make 00:01:48.151 ************************************ 00:01:48.151 12:57:40 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:48.461 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:48.461 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:48.461 meson setup builddir \ 00:01:48.461 -Dwith-libaio=enabled \ 00:01:48.461 -Dwith-liburing=enabled \ 00:01:48.461 -Dwith-libvfn=disabled \ 00:01:48.461 -Dwith-spdk=false && \ 00:01:48.461 meson compile -C builddir && \ 00:01:48.461 cd -) 00:01:48.461 make[1]: Nothing to be done for 'all'. 00:01:51.086 The Meson build system 00:01:51.086 Version: 1.3.1 00:01:51.086 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:51.086 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:51.086 Build type: native build 00:01:51.086 Project name: xnvme 00:01:51.086 Project version: 0.7.3 00:01:51.086 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.086 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.086 Host machine cpu family: x86_64 00:01:51.086 Host machine cpu: x86_64 00:01:51.086 Message: host_machine.system: linux 00:01:51.086 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:51.086 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:51.086 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:51.086 Run-time dependency threads found: YES 00:01:51.086 Has header "setupapi.h" : NO 00:01:51.086 Has header "linux/blkzoned.h" : YES 00:01:51.086 Has header "linux/blkzoned.h" : YES (cached) 00:01:51.086 Has header "libaio.h" : YES 00:01:51.086 Library aio found: YES 00:01:51.086 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.086 Run-time dependency liburing found: YES 2.2 00:01:51.086 Dependency libvfn skipped: feature with-libvfn disabled 00:01:51.086 Run-time dependency appleframeworks found: NO (tried framework) 00:01:51.086 Run-time dependency appleframeworks found: NO (tried framework) 00:01:51.086 Configuring xnvme_config.h using configuration 00:01:51.086 Configuring xnvme.spec using configuration 00:01:51.086 Run-time dependency bash-completion found: YES 2.11 00:01:51.086 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:51.086 Program cp found: YES (/usr/bin/cp) 00:01:51.086 Has header "winsock2.h" : NO 00:01:51.086 Has header "dbghelp.h" : NO 00:01:51.086 Library rpcrt4 found: NO 00:01:51.086 Library rt found: YES 00:01:51.086 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:51.086 Found CMake: /usr/bin/cmake (3.27.7) 00:01:51.086 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:51.086 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:51.086 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:51.086 Build targets in project: 32 00:01:51.086 00:01:51.086 xnvme 0.7.3 00:01:51.086 00:01:51.086 User defined options 00:01:51.086 with-libaio : enabled 00:01:51.086 with-liburing: enabled 00:01:51.086 with-libvfn : disabled 00:01:51.086 with-spdk : false 00:01:51.086 00:01:51.086 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.652 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:51.652 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:51.652 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:51.652 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:51.652 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:01:51.652 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:51.909 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:51.909 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:51.909 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:51.909 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:51.909 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:51.909 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:51.909 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:51.909 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:51.909 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:51.909 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:51.909 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:51.909 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:51.909 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:51.909 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:51.909 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:52.167 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:52.167 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:52.167 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:52.167 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:52.167 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:52.167 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:52.167 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:52.167 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:52.167 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:52.167 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:52.167 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:52.167 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:52.167 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:52.167 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:52.167 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:52.167 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:52.167 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:52.167 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:52.167 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:52.167 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:52.167 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:52.167 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:52.167 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:52.167 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:52.167 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:52.167 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:52.167 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:52.167 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:52.167 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:52.167 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:52.167 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:52.167 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:52.167 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:52.425 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:01:52.425 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:52.425 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:52.425 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:52.425 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:52.425 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:52.425 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:52.425 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:52.425 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:52.425 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:52.425 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:52.425 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:52.425 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:52.425 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:52.425 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:52.683 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:52.683 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:52.683 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:52.683 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:52.683 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:52.683 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:52.683 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:52.683 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:52.683 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:52.683 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:52.683 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:52.683 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:52.683 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:52.683 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:52.683 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:52.683 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:52.941 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:52.941 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be.c.o 00:01:52.941 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:52.941 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:52.941 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:52.941 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:52.941 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:52.941 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:52.941 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:52.941 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:52.941 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:52.941 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:52.941 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:52.941 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:52.941 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:52.941 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:52.941 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:52.941 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:52.941 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:52.941 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:52.941 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:52.941 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:52.941 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:01:52.941 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:52.941 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:01:53.199 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:53.199 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:01:53.199 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:01:53.199 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:01:53.199 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:01:53.199 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:01:53.199 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:01:53.199 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:01:53.199 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:01:53.199 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:01:53.199 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:01:53.199 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:01:53.199 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:01:53.199 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:01:53.199 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:01:53.199 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:01:53.199 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:01:53.199 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:01:53.199 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:01:53.199 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_cmd.c.o 00:01:53.199 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:01:53.199 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:01:53.199 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:01:53.456 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:01:53.457 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:01:53.457 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:01:53.457 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:01:53.457 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:01:53.457 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:01:53.457 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:01:53.457 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:01:53.457 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:01:53.457 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:01:53.457 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:01:53.457 [144/203] Linking target lib/libxnvme.so 00:01:53.714 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:01:53.714 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:01:53.714 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:01:53.714 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:01:53.714 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:01:53.714 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:01:53.714 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:01:53.714 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:01:53.714 [153/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:01:53.714 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:01:53.714 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:01:53.714 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:01:53.714 [157/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:01:53.714 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:01:53.971 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:01:53.971 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:01:53.971 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:01:53.971 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:01:53.971 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:01:53.971 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:01:53.971 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:01:53.971 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:01:53.971 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:01:53.971 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:01:53.971 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:01:53.971 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:01:54.229 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:01:54.229 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:01:54.229 [173/203] Linking static target lib/libxnvme.a 00:01:54.229 [174/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:01:54.229 [175/203] Linking target tests/xnvme_tests_enum 
00:01:54.229 [176/203] Linking target tests/xnvme_tests_cli 00:01:54.229 [177/203] Linking target tests/xnvme_tests_znd_append 00:01:54.229 [178/203] Linking target tests/xnvme_tests_async_intf 00:01:54.229 [179/203] Linking target tests/xnvme_tests_buf 00:01:54.229 [180/203] Linking target tests/xnvme_tests_lblk 00:01:54.229 [181/203] Linking target tests/xnvme_tests_scc 00:01:54.229 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:01:54.229 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:01:54.229 [184/203] Linking target tests/xnvme_tests_ioworker 00:01:54.229 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:01:54.229 [186/203] Linking target tests/xnvme_tests_znd_state 00:01:54.229 [187/203] Linking target tests/xnvme_tests_kvs 00:01:54.229 [188/203] Linking target tests/xnvme_tests_map 00:01:54.229 [189/203] Linking target tools/xdd 00:01:54.229 [190/203] Linking target tests/xnvme_tests_znd_zrwa 00:01:54.229 [191/203] Linking target tools/xnvme 00:01:54.229 [192/203] Linking target examples/xnvme_hello 00:01:54.229 [193/203] Linking target examples/xnvme_enum 00:01:54.229 [194/203] Linking target examples/xnvme_dev 00:01:54.229 [195/203] Linking target tools/lblk 00:01:54.229 [196/203] Linking target tools/xnvme_file 00:01:54.229 [197/203] Linking target tools/zoned 00:01:54.229 [198/203] Linking target examples/xnvme_io_async 00:01:54.229 [199/203] Linking target tools/kvs 00:01:54.229 [200/203] Linking target examples/xnvme_single_async 00:01:54.229 [201/203] Linking target examples/zoned_io_async 00:01:54.229 [202/203] Linking target examples/xnvme_single_sync 00:01:54.486 [203/203] Linking target examples/zoned_io_sync 00:01:54.486 INFO: autodetecting backend as ninja 00:01:54.486 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:54.486 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:02.589 The Meson build system 00:02:02.589 Version: 1.3.1 00:02:02.589 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:02.589 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:02.589 Build type: native build 00:02:02.589 Program cat found: YES (/usr/bin/cat) 00:02:02.589 Project name: DPDK 00:02:02.589 Project version: 24.03.0 00:02:02.589 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:02.589 C linker for the host machine: cc ld.bfd 2.39-16 00:02:02.589 Host machine cpu family: x86_64 00:02:02.589 Host machine cpu: x86_64 00:02:02.589 Message: ## Building in Developer Mode ## 00:02:02.589 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.589 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:02.589 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.589 Program python3 found: YES (/usr/bin/python3) 00:02:02.589 Program cat found: YES (/usr/bin/cat) 00:02:02.589 Compiler for C supports arguments -march=native: YES 00:02:02.589 Checking for size of "void *" : 8 00:02:02.589 Checking for size of "void *" : 8 (cached) 00:02:02.589 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:02.589 Library m found: YES 00:02:02.589 Library numa found: YES 00:02:02.589 Has header "numaif.h" : YES 00:02:02.589 Library fdt found: NO 00:02:02.589 Library execinfo found: NO 00:02:02.589 Has header "execinfo.h" : YES 00:02:02.589 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:02.589 Run-time 
dependency libarchive found: NO (tried pkgconfig) 00:02:02.589 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.589 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.589 Run-time dependency openssl found: YES 3.0.9 00:02:02.589 Run-time dependency libpcap found: YES 1.10.4 00:02:02.589 Has header "pcap.h" with dependency libpcap: YES 00:02:02.590 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.590 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.590 Compiler for C supports arguments -Wformat: YES 00:02:02.590 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.590 Compiler for C supports arguments -Wformat-security: NO 00:02:02.590 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.590 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.590 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.590 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.590 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.590 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.590 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.590 Compiler for C supports arguments -Wundef: YES 00:02:02.590 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.590 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.590 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.590 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.590 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.590 Program objdump found: YES (/usr/bin/objdump) 00:02:02.590 Compiler for C supports arguments -mavx512f: YES 00:02:02.590 Checking if "AVX512 checking" compiles: YES 00:02:02.590 Fetching value of define "__SSE4_2__" : 1 00:02:02.590 Fetching value of define "__AES__" : 1 00:02:02.590 Fetching value of define "__AVX__" : 1 00:02:02.590 Fetching value of define "__AVX2__" : 1 00:02:02.590 Fetching value of define "__AVX512BW__" : (undefined) 00:02:02.590 Fetching value of define "__AVX512CD__" : (undefined) 00:02:02.590 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:02.590 Fetching value of define "__AVX512F__" : (undefined) 00:02:02.590 Fetching value of define "__AVX512VL__" : (undefined) 00:02:02.590 Fetching value of define "__PCLMUL__" : 1 00:02:02.590 Fetching value of define "__RDRND__" : 1 00:02:02.590 Fetching value of define "__RDSEED__" : 1 00:02:02.590 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.590 Fetching value of define "__znver1__" : (undefined) 00:02:02.590 Fetching value of define "__znver2__" : (undefined) 00:02:02.590 Fetching value of define "__znver3__" : (undefined) 00:02:02.590 Fetching value of define "__znver4__" : (undefined) 00:02:02.590 Library asan found: YES 00:02:02.590 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.590 Message: lib/log: Defining dependency "log" 00:02:02.590 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.590 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.590 Library rt found: YES 00:02:02.590 Checking for function "getentropy" : NO 00:02:02.590 Message: lib/eal: Defining dependency "eal" 00:02:02.590 Message: lib/ring: Defining dependency "ring" 00:02:02.590 Message: lib/rcu: Defining dependency "rcu" 00:02:02.590 Message: lib/mempool: Defining dependency "mempool" 00:02:02.590 Message: lib/mbuf: Defining dependency "mbuf" 
00:02:02.590 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.590 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:02.590 Compiler for C supports arguments -mpclmul: YES 00:02:02.590 Compiler for C supports arguments -maes: YES 00:02:02.590 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.590 Compiler for C supports arguments -mavx512bw: YES 00:02:02.590 Compiler for C supports arguments -mavx512dq: YES 00:02:02.590 Compiler for C supports arguments -mavx512vl: YES 00:02:02.590 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.590 Compiler for C supports arguments -mavx2: YES 00:02:02.590 Compiler for C supports arguments -mavx: YES 00:02:02.590 Message: lib/net: Defining dependency "net" 00:02:02.590 Message: lib/meter: Defining dependency "meter" 00:02:02.590 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.590 Message: lib/pci: Defining dependency "pci" 00:02:02.590 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.590 Message: lib/hash: Defining dependency "hash" 00:02:02.590 Message: lib/timer: Defining dependency "timer" 00:02:02.590 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.590 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.590 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.590 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.590 Message: lib/power: Defining dependency "power" 00:02:02.590 Message: lib/reorder: Defining dependency "reorder" 00:02:02.590 Message: lib/security: Defining dependency "security" 00:02:02.590 Has header "linux/userfaultfd.h" : YES 00:02:02.590 Has header "linux/vduse.h" : YES 00:02:02.590 Message: lib/vhost: Defining dependency "vhost" 00:02:02.590 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:02.590 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.590 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.590 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.590 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.590 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.590 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.590 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.590 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.590 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.590 Program doxygen found: YES (/usr/bin/doxygen) 00:02:02.590 Configuring doxy-api-html.conf using configuration 00:02:02.590 Configuring doxy-api-man.conf using configuration 00:02:02.590 Program mandb found: YES (/usr/bin/mandb) 00:02:02.590 Program sphinx-build found: NO 00:02:02.590 Configuring rte_build_config.h using configuration 00:02:02.590 Message: 00:02:02.590 ================= 00:02:02.590 Applications Enabled 00:02:02.590 ================= 00:02:02.590 00:02:02.590 apps: 00:02:02.590 00:02:02.590 00:02:02.590 Message: 00:02:02.590 ================= 00:02:02.590 Libraries Enabled 00:02:02.590 ================= 00:02:02.590 00:02:02.590 libs: 00:02:02.590 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.590 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.590 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.590 00:02:02.590 Message: 00:02:02.590 =============== 00:02:02.590 Drivers Enabled 00:02:02.590 
=============== 00:02:02.590 00:02:02.590 common: 00:02:02.590 00:02:02.590 bus: 00:02:02.590 pci, vdev, 00:02:02.590 mempool: 00:02:02.590 ring, 00:02:02.590 dma: 00:02:02.590 00:02:02.590 net: 00:02:02.590 00:02:02.590 crypto: 00:02:02.590 00:02:02.590 compress: 00:02:02.590 00:02:02.590 vdpa: 00:02:02.590 00:02:02.590 00:02:02.590 Message: 00:02:02.590 ================= 00:02:02.590 Content Skipped 00:02:02.590 ================= 00:02:02.590 00:02:02.590 apps: 00:02:02.590 dumpcap: explicitly disabled via build config 00:02:02.590 graph: explicitly disabled via build config 00:02:02.590 pdump: explicitly disabled via build config 00:02:02.590 proc-info: explicitly disabled via build config 00:02:02.590 test-acl: explicitly disabled via build config 00:02:02.590 test-bbdev: explicitly disabled via build config 00:02:02.590 test-cmdline: explicitly disabled via build config 00:02:02.590 test-compress-perf: explicitly disabled via build config 00:02:02.590 test-crypto-perf: explicitly disabled via build config 00:02:02.590 test-dma-perf: explicitly disabled via build config 00:02:02.590 test-eventdev: explicitly disabled via build config 00:02:02.590 test-fib: explicitly disabled via build config 00:02:02.590 test-flow-perf: explicitly disabled via build config 00:02:02.590 test-gpudev: explicitly disabled via build config 00:02:02.590 test-mldev: explicitly disabled via build config 00:02:02.590 test-pipeline: explicitly disabled via build config 00:02:02.590 test-pmd: explicitly disabled via build config 00:02:02.590 test-regex: explicitly disabled via build config 00:02:02.590 test-sad: explicitly disabled via build config 00:02:02.590 test-security-perf: explicitly disabled via build config 00:02:02.590 00:02:02.590 libs: 00:02:02.590 argparse: explicitly disabled via build config 00:02:02.590 metrics: explicitly disabled via build config 00:02:02.590 acl: explicitly disabled via build config 00:02:02.590 bbdev: explicitly disabled via build config 00:02:02.590 bitratestats: explicitly disabled via build config 00:02:02.590 bpf: explicitly disabled via build config 00:02:02.590 cfgfile: explicitly disabled via build config 00:02:02.590 distributor: explicitly disabled via build config 00:02:02.590 efd: explicitly disabled via build config 00:02:02.590 eventdev: explicitly disabled via build config 00:02:02.590 dispatcher: explicitly disabled via build config 00:02:02.590 gpudev: explicitly disabled via build config 00:02:02.590 gro: explicitly disabled via build config 00:02:02.590 gso: explicitly disabled via build config 00:02:02.590 ip_frag: explicitly disabled via build config 00:02:02.590 jobstats: explicitly disabled via build config 00:02:02.590 latencystats: explicitly disabled via build config 00:02:02.590 lpm: explicitly disabled via build config 00:02:02.590 member: explicitly disabled via build config 00:02:02.590 pcapng: explicitly disabled via build config 00:02:02.590 rawdev: explicitly disabled via build config 00:02:02.590 regexdev: explicitly disabled via build config 00:02:02.590 mldev: explicitly disabled via build config 00:02:02.590 rib: explicitly disabled via build config 00:02:02.590 sched: explicitly disabled via build config 00:02:02.590 stack: explicitly disabled via build config 00:02:02.590 ipsec: explicitly disabled via build config 00:02:02.590 pdcp: explicitly disabled via build config 00:02:02.590 fib: explicitly disabled via build config 00:02:02.590 port: explicitly disabled via build config 00:02:02.591 pdump: explicitly disabled via build config 
00:02:02.591 table: explicitly disabled via build config 00:02:02.591 pipeline: explicitly disabled via build config 00:02:02.591 graph: explicitly disabled via build config 00:02:02.591 node: explicitly disabled via build config 00:02:02.591 00:02:02.591 drivers: 00:02:02.591 common/cpt: not in enabled drivers build config 00:02:02.591 common/dpaax: not in enabled drivers build config 00:02:02.591 common/iavf: not in enabled drivers build config 00:02:02.591 common/idpf: not in enabled drivers build config 00:02:02.591 common/ionic: not in enabled drivers build config 00:02:02.591 common/mvep: not in enabled drivers build config 00:02:02.591 common/octeontx: not in enabled drivers build config 00:02:02.591 bus/auxiliary: not in enabled drivers build config 00:02:02.591 bus/cdx: not in enabled drivers build config 00:02:02.591 bus/dpaa: not in enabled drivers build config 00:02:02.591 bus/fslmc: not in enabled drivers build config 00:02:02.591 bus/ifpga: not in enabled drivers build config 00:02:02.591 bus/platform: not in enabled drivers build config 00:02:02.591 bus/uacce: not in enabled drivers build config 00:02:02.591 bus/vmbus: not in enabled drivers build config 00:02:02.591 common/cnxk: not in enabled drivers build config 00:02:02.591 common/mlx5: not in enabled drivers build config 00:02:02.591 common/nfp: not in enabled drivers build config 00:02:02.591 common/nitrox: not in enabled drivers build config 00:02:02.591 common/qat: not in enabled drivers build config 00:02:02.591 common/sfc_efx: not in enabled drivers build config 00:02:02.591 mempool/bucket: not in enabled drivers build config 00:02:02.591 mempool/cnxk: not in enabled drivers build config 00:02:02.591 mempool/dpaa: not in enabled drivers build config 00:02:02.591 mempool/dpaa2: not in enabled drivers build config 00:02:02.591 mempool/octeontx: not in enabled drivers build config 00:02:02.591 mempool/stack: not in enabled drivers build config 00:02:02.591 dma/cnxk: not in enabled drivers build config 00:02:02.591 dma/dpaa: not in enabled drivers build config 00:02:02.591 dma/dpaa2: not in enabled drivers build config 00:02:02.591 dma/hisilicon: not in enabled drivers build config 00:02:02.591 dma/idxd: not in enabled drivers build config 00:02:02.591 dma/ioat: not in enabled drivers build config 00:02:02.591 dma/skeleton: not in enabled drivers build config 00:02:02.591 net/af_packet: not in enabled drivers build config 00:02:02.591 net/af_xdp: not in enabled drivers build config 00:02:02.591 net/ark: not in enabled drivers build config 00:02:02.591 net/atlantic: not in enabled drivers build config 00:02:02.591 net/avp: not in enabled drivers build config 00:02:02.591 net/axgbe: not in enabled drivers build config 00:02:02.591 net/bnx2x: not in enabled drivers build config 00:02:02.591 net/bnxt: not in enabled drivers build config 00:02:02.591 net/bonding: not in enabled drivers build config 00:02:02.591 net/cnxk: not in enabled drivers build config 00:02:02.591 net/cpfl: not in enabled drivers build config 00:02:02.591 net/cxgbe: not in enabled drivers build config 00:02:02.591 net/dpaa: not in enabled drivers build config 00:02:02.591 net/dpaa2: not in enabled drivers build config 00:02:02.591 net/e1000: not in enabled drivers build config 00:02:02.591 net/ena: not in enabled drivers build config 00:02:02.591 net/enetc: not in enabled drivers build config 00:02:02.591 net/enetfec: not in enabled drivers build config 00:02:02.591 net/enic: not in enabled drivers build config 00:02:02.591 net/failsafe: not in enabled 
drivers build config 00:02:02.591 net/fm10k: not in enabled drivers build config 00:02:02.591 net/gve: not in enabled drivers build config 00:02:02.591 net/hinic: not in enabled drivers build config 00:02:02.591 net/hns3: not in enabled drivers build config 00:02:02.591 net/i40e: not in enabled drivers build config 00:02:02.591 net/iavf: not in enabled drivers build config 00:02:02.591 net/ice: not in enabled drivers build config 00:02:02.591 net/idpf: not in enabled drivers build config 00:02:02.591 net/igc: not in enabled drivers build config 00:02:02.591 net/ionic: not in enabled drivers build config 00:02:02.591 net/ipn3ke: not in enabled drivers build config 00:02:02.591 net/ixgbe: not in enabled drivers build config 00:02:02.591 net/mana: not in enabled drivers build config 00:02:02.591 net/memif: not in enabled drivers build config 00:02:02.591 net/mlx4: not in enabled drivers build config 00:02:02.591 net/mlx5: not in enabled drivers build config 00:02:02.591 net/mvneta: not in enabled drivers build config 00:02:02.591 net/mvpp2: not in enabled drivers build config 00:02:02.591 net/netvsc: not in enabled drivers build config 00:02:02.591 net/nfb: not in enabled drivers build config 00:02:02.591 net/nfp: not in enabled drivers build config 00:02:02.591 net/ngbe: not in enabled drivers build config 00:02:02.591 net/null: not in enabled drivers build config 00:02:02.591 net/octeontx: not in enabled drivers build config 00:02:02.591 net/octeon_ep: not in enabled drivers build config 00:02:02.591 net/pcap: not in enabled drivers build config 00:02:02.591 net/pfe: not in enabled drivers build config 00:02:02.591 net/qede: not in enabled drivers build config 00:02:02.591 net/ring: not in enabled drivers build config 00:02:02.591 net/sfc: not in enabled drivers build config 00:02:02.591 net/softnic: not in enabled drivers build config 00:02:02.591 net/tap: not in enabled drivers build config 00:02:02.591 net/thunderx: not in enabled drivers build config 00:02:02.591 net/txgbe: not in enabled drivers build config 00:02:02.591 net/vdev_netvsc: not in enabled drivers build config 00:02:02.591 net/vhost: not in enabled drivers build config 00:02:02.591 net/virtio: not in enabled drivers build config 00:02:02.591 net/vmxnet3: not in enabled drivers build config 00:02:02.591 raw/*: missing internal dependency, "rawdev" 00:02:02.591 crypto/armv8: not in enabled drivers build config 00:02:02.591 crypto/bcmfs: not in enabled drivers build config 00:02:02.591 crypto/caam_jr: not in enabled drivers build config 00:02:02.591 crypto/ccp: not in enabled drivers build config 00:02:02.591 crypto/cnxk: not in enabled drivers build config 00:02:02.591 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.591 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.591 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.591 crypto/mlx5: not in enabled drivers build config 00:02:02.591 crypto/mvsam: not in enabled drivers build config 00:02:02.591 crypto/nitrox: not in enabled drivers build config 00:02:02.591 crypto/null: not in enabled drivers build config 00:02:02.591 crypto/octeontx: not in enabled drivers build config 00:02:02.591 crypto/openssl: not in enabled drivers build config 00:02:02.591 crypto/scheduler: not in enabled drivers build config 00:02:02.591 crypto/uadk: not in enabled drivers build config 00:02:02.591 crypto/virtio: not in enabled drivers build config 00:02:02.591 compress/isal: not in enabled drivers build config 00:02:02.591 compress/mlx5: not in enabled 
drivers build config 00:02:02.591 compress/nitrox: not in enabled drivers build config 00:02:02.591 compress/octeontx: not in enabled drivers build config 00:02:02.591 compress/zlib: not in enabled drivers build config 00:02:02.591 regex/*: missing internal dependency, "regexdev" 00:02:02.591 ml/*: missing internal dependency, "mldev" 00:02:02.591 vdpa/ifc: not in enabled drivers build config 00:02:02.591 vdpa/mlx5: not in enabled drivers build config 00:02:02.591 vdpa/nfp: not in enabled drivers build config 00:02:02.591 vdpa/sfc: not in enabled drivers build config 00:02:02.591 event/*: missing internal dependency, "eventdev" 00:02:02.591 baseband/*: missing internal dependency, "bbdev" 00:02:02.591 gpu/*: missing internal dependency, "gpudev" 00:02:02.591 00:02:02.591 00:02:02.591 Build targets in project: 85 00:02:02.591 00:02:02.591 DPDK 24.03.0 00:02:02.591 00:02:02.591 User defined options 00:02:02.591 buildtype : debug 00:02:02.591 default_library : shared 00:02:02.591 libdir : lib 00:02:02.591 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:02.591 b_sanitize : address 00:02:02.591 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.591 c_link_args : 00:02:02.591 cpu_instruction_set: native 00:02:02.591 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:02.591 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:02.591 enable_docs : false 00:02:02.591 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.591 enable_kmods : false 00:02:02.591 max_lcores : 128 00:02:02.591 tests : false 00:02:02.591 00:02:02.591 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.849 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:02.849 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:02.849 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:02.849 [3/268] Linking static target lib/librte_kvargs.a 00:02:02.849 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:02.849 [5/268] Linking static target lib/librte_log.a 00:02:03.107 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.364 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.364 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.364 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.622 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.622 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.879 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.879 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.879 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.879 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.879 [16/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.879 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.879 [18/268] Linking static target lib/librte_telemetry.a 00:02:03.879 [19/268] Linking target lib/librte_log.so.24.1 00:02:04.137 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.137 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:04.407 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:04.407 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:04.407 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:04.407 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.682 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.682 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:04.682 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:04.682 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:04.939 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:04.939 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.939 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.939 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:05.197 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.197 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:05.197 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.197 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.456 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.456 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.456 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.456 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:05.456 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:05.456 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.714 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.972 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.972 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.972 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.230 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.230 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:06.230 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.230 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.487 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.487 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:06.487 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:06.487 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.053 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.053 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.053 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.053 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.053 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.311 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.311 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.311 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.311 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.311 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:07.311 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.876 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:07.876 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.133 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.133 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.133 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.133 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.133 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.133 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.390 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.390 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.390 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.390 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.390 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.647 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.647 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.647 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:08.905 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.905 [84/268] Linking static target lib/librte_ring.a 00:02:09.162 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.162 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.162 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.162 [88/268] Linking static target lib/librte_eal.a 00:02:09.162 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.162 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.420 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.420 [92/268] Linking static target lib/librte_mempool.a 00:02:09.420 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.420 [94/268] Linking static target lib/librte_rcu.a 00:02:09.420 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.420 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.679 
[97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.679 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.936 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:09.936 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:09.936 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:09.936 [102/268] Linking static target lib/librte_mbuf.a 00:02:09.936 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.194 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.194 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.194 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.194 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.452 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.452 [109/268] Linking static target lib/librte_net.a 00:02:10.452 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.711 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.711 [112/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.711 [113/268] Linking static target lib/librte_meter.a 00:02:10.711 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.968 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.968 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.968 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.226 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.226 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.483 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.741 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.000 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.000 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.000 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.000 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.000 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.000 [127/268] Linking static target lib/librte_pci.a 00:02:12.257 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.257 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.515 [130/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.515 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.515 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.515 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.515 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.772 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.772 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.772 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.772 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.772 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.772 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.772 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.772 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.041 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:13.041 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:13.041 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.041 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.041 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.041 [148/268] Linking static target lib/librte_cmdline.a 00:02:13.309 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.567 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.567 [151/268] Linking static target lib/librte_ethdev.a 00:02:13.567 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.567 [153/268] Linking static target lib/librte_timer.a 00:02:13.825 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.825 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.825 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.083 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.340 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.340 [159/268] Linking static target lib/librte_compressdev.a 00:02:14.340 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.340 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.340 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.598 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.598 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.598 [165/268] Linking static target lib/librte_hash.a 00:02:14.598 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.856 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.856 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.856 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.856 [170/268] Linking static target lib/librte_dmadev.a 00:02:14.856 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.114 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.114 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.114 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.372 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.372 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.630 [177/268] 
Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.630 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.630 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.630 [180/268] Linking static target lib/librte_cryptodev.a 00:02:15.630 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.630 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.630 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.630 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.197 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.197 [186/268] Linking static target lib/librte_power.a 00:02:16.197 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.197 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.455 [189/268] Linking static target lib/librte_reorder.a 00:02:16.455 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.455 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.455 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.455 [193/268] Linking static target lib/librte_security.a 00:02:16.713 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.713 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.713 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.971 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.971 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.228 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.484 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.484 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.484 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.484 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.484 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:17.484 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.046 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.046 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.046 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.046 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.046 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.046 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.303 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:18.303 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.303 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.303 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:18.303 [216/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.303 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.303 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:18.303 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.303 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.303 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:18.561 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.561 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:18.561 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.561 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.561 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:18.818 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.384 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.384 [229/268] Linking target lib/librte_eal.so.24.1 00:02:19.642 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:19.642 [231/268] Linking target lib/librte_meter.so.24.1 00:02:19.642 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:19.642 [233/268] Linking target lib/librte_ring.so.24.1 00:02:19.642 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:19.642 [235/268] Linking target lib/librte_timer.so.24.1 00:02:19.642 [236/268] Linking target lib/librte_pci.so.24.1 00:02:19.900 [237/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:19.900 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:19.900 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:19.900 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:19.900 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:19.900 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:19.900 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:19.900 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:19.900 [245/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:19.900 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:19.900 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:20.158 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:20.158 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:20.158 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:20.158 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:20.158 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:20.158 [253/268] Linking target lib/librte_net.so.24.1 00:02:20.416 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:20.416 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:20.416 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:20.416 [257/268] Linking 
target lib/librte_hash.so.24.1 00:02:20.416 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:20.416 [259/268] Linking target lib/librte_security.so.24.1 00:02:20.416 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.674 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:20.674 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:20.674 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:20.932 [264/268] Linking target lib/librte_power.so.24.1 00:02:23.463 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.463 [266/268] Linking static target lib/librte_vhost.a 00:02:25.362 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.362 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.362 INFO: autodetecting backend as ninja 00:02:25.362 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:26.296 CC lib/log/log.o 00:02:26.296 CC lib/log/log_flags.o 00:02:26.296 CC lib/log/log_deprecated.o 00:02:26.296 CC lib/ut/ut.o 00:02:26.297 CC lib/ut_mock/mock.o 00:02:26.297 LIB libspdk_ut.a 00:02:26.297 LIB libspdk_ut_mock.a 00:02:26.297 LIB libspdk_log.a 00:02:26.297 SO libspdk_ut.so.2.0 00:02:26.297 SO libspdk_ut_mock.so.6.0 00:02:26.554 SO libspdk_log.so.7.0 00:02:26.554 SYMLINK libspdk_ut.so 00:02:26.554 SYMLINK libspdk_ut_mock.so 00:02:26.554 SYMLINK libspdk_log.so 00:02:26.812 CC lib/util/base64.o 00:02:26.812 CC lib/util/bit_array.o 00:02:26.812 CC lib/util/cpuset.o 00:02:26.812 CC lib/util/crc32.o 00:02:26.812 CC lib/util/crc16.o 00:02:26.812 CC lib/util/crc32c.o 00:02:26.812 CC lib/dma/dma.o 00:02:26.812 CC lib/ioat/ioat.o 00:02:26.812 CXX lib/trace_parser/trace.o 00:02:26.812 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.812 CC lib/vfio_user/host/vfio_user.o 00:02:26.812 CC lib/util/crc32_ieee.o 00:02:26.812 CC lib/util/crc64.o 00:02:26.812 CC lib/util/dif.o 00:02:27.069 LIB libspdk_dma.a 00:02:27.069 CC lib/util/fd.o 00:02:27.069 SO libspdk_dma.so.4.0 00:02:27.069 CC lib/util/fd_group.o 00:02:27.069 CC lib/util/file.o 00:02:27.069 SYMLINK libspdk_dma.so 00:02:27.069 CC lib/util/hexlify.o 00:02:27.069 CC lib/util/iov.o 00:02:27.069 LIB libspdk_ioat.a 00:02:27.069 CC lib/util/math.o 00:02:27.069 SO libspdk_ioat.so.7.0 00:02:27.069 CC lib/util/net.o 00:02:27.327 LIB libspdk_vfio_user.a 00:02:27.327 SYMLINK libspdk_ioat.so 00:02:27.327 CC lib/util/pipe.o 00:02:27.327 CC lib/util/strerror_tls.o 00:02:27.327 CC lib/util/string.o 00:02:27.327 SO libspdk_vfio_user.so.5.0 00:02:27.327 CC lib/util/uuid.o 00:02:27.327 CC lib/util/xor.o 00:02:27.327 SYMLINK libspdk_vfio_user.so 00:02:27.327 CC lib/util/zipf.o 00:02:27.585 LIB libspdk_util.a 00:02:27.843 SO libspdk_util.so.10.0 00:02:27.843 LIB libspdk_trace_parser.a 00:02:28.100 SYMLINK libspdk_util.so 00:02:28.100 SO libspdk_trace_parser.so.5.0 00:02:28.100 SYMLINK libspdk_trace_parser.so 00:02:28.100 CC lib/conf/conf.o 00:02:28.100 CC lib/vmd/vmd.o 00:02:28.100 CC lib/idxd/idxd.o 00:02:28.100 CC lib/vmd/led.o 00:02:28.100 CC lib/rdma_utils/rdma_utils.o 00:02:28.100 CC lib/idxd/idxd_user.o 00:02:28.100 CC lib/json/json_parse.o 00:02:28.100 CC lib/idxd/idxd_kernel.o 00:02:28.100 CC lib/env_dpdk/env.o 00:02:28.100 CC lib/rdma_provider/common.o 00:02:28.358 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:28.358 CC lib/json/json_util.o 
00:02:28.358 LIB libspdk_conf.a 00:02:28.358 CC lib/json/json_write.o 00:02:28.358 SO libspdk_conf.so.6.0 00:02:28.358 CC lib/env_dpdk/memory.o 00:02:28.358 CC lib/env_dpdk/pci.o 00:02:28.616 LIB libspdk_rdma_utils.a 00:02:28.616 SO libspdk_rdma_utils.so.1.0 00:02:28.616 SYMLINK libspdk_conf.so 00:02:28.616 LIB libspdk_rdma_provider.a 00:02:28.616 CC lib/env_dpdk/init.o 00:02:28.616 SO libspdk_rdma_provider.so.6.0 00:02:28.616 SYMLINK libspdk_rdma_utils.so 00:02:28.616 CC lib/env_dpdk/threads.o 00:02:28.616 CC lib/env_dpdk/pci_ioat.o 00:02:28.616 SYMLINK libspdk_rdma_provider.so 00:02:28.616 CC lib/env_dpdk/pci_virtio.o 00:02:28.874 CC lib/env_dpdk/pci_vmd.o 00:02:28.874 CC lib/env_dpdk/pci_idxd.o 00:02:28.874 CC lib/env_dpdk/pci_event.o 00:02:28.874 LIB libspdk_json.a 00:02:28.874 SO libspdk_json.so.6.0 00:02:28.874 LIB libspdk_idxd.a 00:02:28.874 CC lib/env_dpdk/sigbus_handler.o 00:02:28.874 CC lib/env_dpdk/pci_dpdk.o 00:02:28.874 SYMLINK libspdk_json.so 00:02:28.874 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:28.874 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:28.874 SO libspdk_idxd.so.12.0 00:02:29.132 LIB libspdk_vmd.a 00:02:29.132 SYMLINK libspdk_idxd.so 00:02:29.132 SO libspdk_vmd.so.6.0 00:02:29.132 SYMLINK libspdk_vmd.so 00:02:29.132 CC lib/jsonrpc/jsonrpc_server.o 00:02:29.132 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:29.132 CC lib/jsonrpc/jsonrpc_client.o 00:02:29.132 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:29.390 LIB libspdk_jsonrpc.a 00:02:29.390 SO libspdk_jsonrpc.so.6.0 00:02:29.648 SYMLINK libspdk_jsonrpc.so 00:02:29.906 CC lib/rpc/rpc.o 00:02:29.906 LIB libspdk_env_dpdk.a 00:02:30.165 LIB libspdk_rpc.a 00:02:30.165 SO libspdk_env_dpdk.so.15.0 00:02:30.165 SO libspdk_rpc.so.6.0 00:02:30.165 SYMLINK libspdk_rpc.so 00:02:30.165 SYMLINK libspdk_env_dpdk.so 00:02:30.424 CC lib/trace/trace.o 00:02:30.424 CC lib/trace/trace_flags.o 00:02:30.424 CC lib/trace/trace_rpc.o 00:02:30.424 CC lib/notify/notify.o 00:02:30.424 CC lib/notify/notify_rpc.o 00:02:30.424 CC lib/keyring/keyring.o 00:02:30.424 CC lib/keyring/keyring_rpc.o 00:02:30.683 LIB libspdk_notify.a 00:02:30.683 SO libspdk_notify.so.6.0 00:02:30.683 LIB libspdk_keyring.a 00:02:30.683 SO libspdk_keyring.so.1.0 00:02:30.683 SYMLINK libspdk_notify.so 00:02:30.683 LIB libspdk_trace.a 00:02:30.683 SYMLINK libspdk_keyring.so 00:02:30.970 SO libspdk_trace.so.10.0 00:02:30.970 SYMLINK libspdk_trace.so 00:02:31.227 CC lib/thread/thread.o 00:02:31.227 CC lib/thread/iobuf.o 00:02:31.227 CC lib/sock/sock.o 00:02:31.227 CC lib/sock/sock_rpc.o 00:02:31.791 LIB libspdk_sock.a 00:02:31.791 SO libspdk_sock.so.10.0 00:02:31.791 SYMLINK libspdk_sock.so 00:02:32.049 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.049 CC lib/nvme/nvme_ctrlr.o 00:02:32.049 CC lib/nvme/nvme_fabric.o 00:02:32.049 CC lib/nvme/nvme_ns_cmd.o 00:02:32.049 CC lib/nvme/nvme_ns.o 00:02:32.049 CC lib/nvme/nvme_pcie_common.o 00:02:32.049 CC lib/nvme/nvme_pcie.o 00:02:32.049 CC lib/nvme/nvme_qpair.o 00:02:32.049 CC lib/nvme/nvme.o 00:02:32.982 CC lib/nvme/nvme_quirks.o 00:02:32.982 CC lib/nvme/nvme_transport.o 00:02:32.982 CC lib/nvme/nvme_discovery.o 00:02:33.240 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.240 LIB libspdk_thread.a 00:02:33.240 SO libspdk_thread.so.10.1 00:02:33.240 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.240 CC lib/nvme/nvme_tcp.o 00:02:33.240 CC lib/nvme/nvme_opal.o 00:02:33.240 SYMLINK libspdk_thread.so 00:02:33.498 CC lib/accel/accel.o 00:02:33.498 CC lib/blob/blobstore.o 00:02:33.756 CC lib/blob/request.o 00:02:33.756 CC lib/init/json_config.o 00:02:33.756 CC 
lib/nvme/nvme_io_msg.o 00:02:33.756 CC lib/blob/zeroes.o 00:02:34.013 CC lib/blob/blob_bs_dev.o 00:02:34.013 CC lib/nvme/nvme_poll_group.o 00:02:34.013 CC lib/init/subsystem.o 00:02:34.013 CC lib/accel/accel_rpc.o 00:02:34.013 CC lib/nvme/nvme_zns.o 00:02:34.271 CC lib/nvme/nvme_stubs.o 00:02:34.271 CC lib/init/subsystem_rpc.o 00:02:34.271 CC lib/accel/accel_sw.o 00:02:34.271 CC lib/init/rpc.o 00:02:34.529 LIB libspdk_init.a 00:02:34.529 SO libspdk_init.so.5.0 00:02:34.529 CC lib/nvme/nvme_auth.o 00:02:34.529 CC lib/nvme/nvme_cuse.o 00:02:34.529 CC lib/virtio/virtio.o 00:02:34.529 SYMLINK libspdk_init.so 00:02:34.529 CC lib/nvme/nvme_rdma.o 00:02:34.787 LIB libspdk_accel.a 00:02:34.787 CC lib/virtio/virtio_vhost_user.o 00:02:34.787 SO libspdk_accel.so.16.0 00:02:34.787 CC lib/virtio/virtio_vfio_user.o 00:02:34.787 CC lib/event/app.o 00:02:35.045 SYMLINK libspdk_accel.so 00:02:35.045 CC lib/event/reactor.o 00:02:35.045 CC lib/virtio/virtio_pci.o 00:02:35.045 CC lib/event/log_rpc.o 00:02:35.303 CC lib/event/app_rpc.o 00:02:35.303 CC lib/bdev/bdev.o 00:02:35.303 CC lib/event/scheduler_static.o 00:02:35.303 LIB libspdk_virtio.a 00:02:35.561 SO libspdk_virtio.so.7.0 00:02:35.561 CC lib/bdev/bdev_rpc.o 00:02:35.561 CC lib/bdev/bdev_zone.o 00:02:35.561 CC lib/bdev/part.o 00:02:35.561 LIB libspdk_event.a 00:02:35.561 SYMLINK libspdk_virtio.so 00:02:35.561 CC lib/bdev/scsi_nvme.o 00:02:35.561 SO libspdk_event.so.14.0 00:02:35.561 SYMLINK libspdk_event.so 00:02:36.495 LIB libspdk_nvme.a 00:02:36.495 SO libspdk_nvme.so.13.1 00:02:37.062 SYMLINK libspdk_nvme.so 00:02:37.997 LIB libspdk_blob.a 00:02:37.997 SO libspdk_blob.so.11.0 00:02:37.997 SYMLINK libspdk_blob.so 00:02:38.255 CC lib/lvol/lvol.o 00:02:38.255 CC lib/blobfs/tree.o 00:02:38.255 CC lib/blobfs/blobfs.o 00:02:38.821 LIB libspdk_bdev.a 00:02:39.079 SO libspdk_bdev.so.16.0 00:02:39.079 SYMLINK libspdk_bdev.so 00:02:39.349 CC lib/ublk/ublk.o 00:02:39.349 CC lib/ublk/ublk_rpc.o 00:02:39.349 CC lib/nvmf/ctrlr.o 00:02:39.349 CC lib/nvmf/ctrlr_discovery.o 00:02:39.349 CC lib/nvmf/ctrlr_bdev.o 00:02:39.349 CC lib/ftl/ftl_core.o 00:02:39.349 CC lib/nbd/nbd.o 00:02:39.349 CC lib/scsi/dev.o 00:02:39.620 LIB libspdk_blobfs.a 00:02:39.620 LIB libspdk_lvol.a 00:02:39.620 SO libspdk_blobfs.so.10.0 00:02:39.620 CC lib/nvmf/subsystem.o 00:02:39.620 SO libspdk_lvol.so.10.0 00:02:39.620 SYMLINK libspdk_blobfs.so 00:02:39.620 CC lib/ftl/ftl_init.o 00:02:39.620 CC lib/scsi/lun.o 00:02:39.620 SYMLINK libspdk_lvol.so 00:02:39.620 CC lib/scsi/port.o 00:02:39.878 CC lib/ftl/ftl_layout.o 00:02:39.878 CC lib/nbd/nbd_rpc.o 00:02:39.878 CC lib/ftl/ftl_debug.o 00:02:39.878 CC lib/ftl/ftl_io.o 00:02:40.135 CC lib/nvmf/nvmf.o 00:02:40.135 CC lib/scsi/scsi.o 00:02:40.135 LIB libspdk_nbd.a 00:02:40.135 SO libspdk_nbd.so.7.0 00:02:40.135 LIB libspdk_ublk.a 00:02:40.135 CC lib/scsi/scsi_bdev.o 00:02:40.135 CC lib/ftl/ftl_sb.o 00:02:40.135 SYMLINK libspdk_nbd.so 00:02:40.135 CC lib/scsi/scsi_pr.o 00:02:40.135 CC lib/scsi/scsi_rpc.o 00:02:40.135 SO libspdk_ublk.so.3.0 00:02:40.135 CC lib/scsi/task.o 00:02:40.393 CC lib/ftl/ftl_l2p.o 00:02:40.393 SYMLINK libspdk_ublk.so 00:02:40.393 CC lib/ftl/ftl_l2p_flat.o 00:02:40.393 CC lib/nvmf/nvmf_rpc.o 00:02:40.393 CC lib/nvmf/transport.o 00:02:40.393 CC lib/nvmf/tcp.o 00:02:40.651 CC lib/ftl/ftl_nv_cache.o 00:02:40.651 CC lib/ftl/ftl_band.o 00:02:40.651 CC lib/ftl/ftl_band_ops.o 00:02:40.908 LIB libspdk_scsi.a 00:02:40.908 SO libspdk_scsi.so.9.0 00:02:41.165 CC lib/ftl/ftl_writer.o 00:02:41.165 SYMLINK libspdk_scsi.so 00:02:41.165 CC 
lib/ftl/ftl_rq.o 00:02:41.165 CC lib/ftl/ftl_reloc.o 00:02:41.165 CC lib/ftl/ftl_l2p_cache.o 00:02:41.165 CC lib/nvmf/stubs.o 00:02:41.165 CC lib/nvmf/mdns_server.o 00:02:41.422 CC lib/nvmf/rdma.o 00:02:41.422 CC lib/iscsi/conn.o 00:02:41.422 CC lib/iscsi/init_grp.o 00:02:41.681 CC lib/vhost/vhost.o 00:02:41.681 CC lib/vhost/vhost_rpc.o 00:02:41.681 CC lib/vhost/vhost_scsi.o 00:02:41.681 CC lib/nvmf/auth.o 00:02:41.939 CC lib/ftl/ftl_p2l.o 00:02:41.939 CC lib/iscsi/iscsi.o 00:02:41.939 CC lib/iscsi/md5.o 00:02:42.196 CC lib/vhost/vhost_blk.o 00:02:42.454 CC lib/ftl/mngt/ftl_mngt.o 00:02:42.454 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:42.454 CC lib/vhost/rte_vhost_user.o 00:02:42.454 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:42.712 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:42.712 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:42.712 CC lib/iscsi/param.o 00:02:42.712 CC lib/iscsi/portal_grp.o 00:02:42.712 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:42.970 CC lib/iscsi/tgt_node.o 00:02:42.970 CC lib/iscsi/iscsi_subsystem.o 00:02:43.227 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.227 CC lib/iscsi/iscsi_rpc.o 00:02:43.227 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:43.227 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:43.227 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:43.485 CC lib/iscsi/task.o 00:02:43.485 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:43.485 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:43.485 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:43.485 CC lib/ftl/utils/ftl_conf.o 00:02:43.742 CC lib/ftl/utils/ftl_md.o 00:02:43.742 CC lib/ftl/utils/ftl_mempool.o 00:02:43.742 CC lib/ftl/utils/ftl_bitmap.o 00:02:43.742 CC lib/ftl/utils/ftl_property.o 00:02:43.742 LIB libspdk_vhost.a 00:02:43.742 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:43.742 LIB libspdk_iscsi.a 00:02:43.742 SO libspdk_vhost.so.8.0 00:02:43.742 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.001 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.001 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.001 SO libspdk_iscsi.so.8.0 00:02:44.001 SYMLINK libspdk_vhost.so 00:02:44.001 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:44.001 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:44.001 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:44.259 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:44.259 SYMLINK libspdk_iscsi.so 00:02:44.259 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:44.259 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:44.259 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:44.259 CC lib/ftl/base/ftl_base_dev.o 00:02:44.259 CC lib/ftl/base/ftl_base_bdev.o 00:02:44.259 CC lib/ftl/ftl_trace.o 00:02:44.516 LIB libspdk_nvmf.a 00:02:44.516 LIB libspdk_ftl.a 00:02:44.774 SO libspdk_nvmf.so.19.0 00:02:44.774 SO libspdk_ftl.so.9.0 00:02:45.031 SYMLINK libspdk_nvmf.so 00:02:45.289 SYMLINK libspdk_ftl.so 00:02:45.546 CC module/env_dpdk/env_dpdk_rpc.o 00:02:45.546 CC module/accel/dsa/accel_dsa.o 00:02:45.547 CC module/accel/ioat/accel_ioat.o 00:02:45.547 CC module/accel/error/accel_error.o 00:02:45.547 CC module/accel/iaa/accel_iaa.o 00:02:45.547 CC module/sock/posix/posix.o 00:02:45.547 CC module/blob/bdev/blob_bdev.o 00:02:45.547 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:45.804 CC module/keyring/file/keyring.o 00:02:45.804 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:45.804 LIB libspdk_env_dpdk_rpc.a 00:02:45.804 SO libspdk_env_dpdk_rpc.so.6.0 00:02:45.804 SYMLINK libspdk_env_dpdk_rpc.so 00:02:45.804 CC module/keyring/file/keyring_rpc.o 00:02:45.804 LIB libspdk_scheduler_dpdk_governor.a 00:02:45.804 CC module/accel/error/accel_error_rpc.o 00:02:45.804 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:45.804 CC 
module/accel/ioat/accel_ioat_rpc.o 00:02:45.804 LIB libspdk_scheduler_dynamic.a 00:02:45.804 CC module/accel/iaa/accel_iaa_rpc.o 00:02:46.061 SO libspdk_scheduler_dynamic.so.4.0 00:02:46.061 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:46.061 CC module/accel/dsa/accel_dsa_rpc.o 00:02:46.061 LIB libspdk_blob_bdev.a 00:02:46.061 LIB libspdk_keyring_file.a 00:02:46.061 LIB libspdk_accel_ioat.a 00:02:46.061 SYMLINK libspdk_scheduler_dynamic.so 00:02:46.061 CC module/scheduler/gscheduler/gscheduler.o 00:02:46.061 SO libspdk_blob_bdev.so.11.0 00:02:46.061 SO libspdk_keyring_file.so.1.0 00:02:46.061 LIB libspdk_accel_error.a 00:02:46.061 SO libspdk_accel_ioat.so.6.0 00:02:46.061 LIB libspdk_accel_iaa.a 00:02:46.061 SYMLINK libspdk_blob_bdev.so 00:02:46.061 SO libspdk_accel_error.so.2.0 00:02:46.061 SO libspdk_accel_iaa.so.3.0 00:02:46.061 SYMLINK libspdk_keyring_file.so 00:02:46.061 SYMLINK libspdk_accel_ioat.so 00:02:46.061 SYMLINK libspdk_accel_error.so 00:02:46.318 SYMLINK libspdk_accel_iaa.so 00:02:46.318 LIB libspdk_accel_dsa.a 00:02:46.318 CC module/keyring/linux/keyring.o 00:02:46.318 SO libspdk_accel_dsa.so.5.0 00:02:46.318 LIB libspdk_scheduler_gscheduler.a 00:02:46.318 SO libspdk_scheduler_gscheduler.so.4.0 00:02:46.318 SYMLINK libspdk_accel_dsa.so 00:02:46.318 CC module/bdev/error/vbdev_error.o 00:02:46.318 CC module/bdev/malloc/bdev_malloc.o 00:02:46.318 SYMLINK libspdk_scheduler_gscheduler.so 00:02:46.318 CC module/bdev/delay/vbdev_delay.o 00:02:46.318 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:46.318 CC module/keyring/linux/keyring_rpc.o 00:02:46.318 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.318 CC module/bdev/lvol/vbdev_lvol.o 00:02:46.318 CC module/bdev/gpt/gpt.o 00:02:46.575 LIB libspdk_keyring_linux.a 00:02:46.575 CC module/bdev/null/bdev_null.o 00:02:46.575 SO libspdk_keyring_linux.so.1.0 00:02:46.575 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:46.575 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:46.575 LIB libspdk_sock_posix.a 00:02:46.575 CC module/bdev/gpt/vbdev_gpt.o 00:02:46.575 SYMLINK libspdk_keyring_linux.so 00:02:46.575 CC module/bdev/null/bdev_null_rpc.o 00:02:46.575 SO libspdk_sock_posix.so.6.0 00:02:46.833 CC module/bdev/error/vbdev_error_rpc.o 00:02:46.833 SYMLINK libspdk_sock_posix.so 00:02:46.833 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:46.833 LIB libspdk_blobfs_bdev.a 00:02:46.833 SO libspdk_blobfs_bdev.so.6.0 00:02:46.833 LIB libspdk_bdev_delay.a 00:02:46.833 LIB libspdk_bdev_null.a 00:02:46.833 LIB libspdk_bdev_error.a 00:02:46.833 SYMLINK libspdk_blobfs_bdev.so 00:02:46.833 SO libspdk_bdev_delay.so.6.0 00:02:46.833 SO libspdk_bdev_null.so.6.0 00:02:46.833 SO libspdk_bdev_error.so.6.0 00:02:47.091 LIB libspdk_bdev_malloc.a 00:02:47.091 LIB libspdk_bdev_gpt.a 00:02:47.091 SYMLINK libspdk_bdev_delay.so 00:02:47.091 SYMLINK libspdk_bdev_error.so 00:02:47.091 SO libspdk_bdev_malloc.so.6.0 00:02:47.091 SYMLINK libspdk_bdev_null.so 00:02:47.091 SO libspdk_bdev_gpt.so.6.0 00:02:47.091 CC module/bdev/passthru/vbdev_passthru.o 00:02:47.091 CC module/bdev/nvme/bdev_nvme.o 00:02:47.091 LIB libspdk_bdev_lvol.a 00:02:47.091 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:47.091 CC module/bdev/raid/bdev_raid.o 00:02:47.091 SYMLINK libspdk_bdev_malloc.so 00:02:47.091 SYMLINK libspdk_bdev_gpt.so 00:02:47.091 SO libspdk_bdev_lvol.so.6.0 00:02:47.091 CC module/bdev/xnvme/bdev_xnvme.o 00:02:47.091 CC module/bdev/split/vbdev_split.o 00:02:47.091 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:47.349 SYMLINK libspdk_bdev_lvol.so 00:02:47.349 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:47.349 CC module/bdev/raid/bdev_raid_rpc.o 00:02:47.349 CC module/bdev/aio/bdev_aio.o 00:02:47.349 CC module/bdev/ftl/bdev_ftl.o 00:02:47.349 CC module/bdev/raid/bdev_raid_sb.o 00:02:47.607 CC module/bdev/split/vbdev_split_rpc.o 00:02:47.607 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:47.607 LIB libspdk_bdev_passthru.a 00:02:47.607 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:47.607 SO libspdk_bdev_passthru.so.6.0 00:02:47.607 SYMLINK libspdk_bdev_passthru.so 00:02:47.607 CC module/bdev/raid/raid0.o 00:02:47.607 LIB libspdk_bdev_split.a 00:02:47.607 LIB libspdk_bdev_zone_block.a 00:02:47.607 SO libspdk_bdev_split.so.6.0 00:02:47.607 LIB libspdk_bdev_xnvme.a 00:02:47.607 SO libspdk_bdev_zone_block.so.6.0 00:02:47.866 CC module/bdev/aio/bdev_aio_rpc.o 00:02:47.866 SO libspdk_bdev_xnvme.so.3.0 00:02:47.866 LIB libspdk_bdev_ftl.a 00:02:47.866 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:47.866 SYMLINK libspdk_bdev_split.so 00:02:47.866 CC module/bdev/nvme/nvme_rpc.o 00:02:47.866 SYMLINK libspdk_bdev_zone_block.so 00:02:47.866 CC module/bdev/nvme/bdev_mdns_client.o 00:02:47.866 CC module/bdev/iscsi/bdev_iscsi.o 00:02:47.866 SO libspdk_bdev_ftl.so.6.0 00:02:47.866 SYMLINK libspdk_bdev_xnvme.so 00:02:47.866 CC module/bdev/nvme/vbdev_opal.o 00:02:47.866 SYMLINK libspdk_bdev_ftl.so 00:02:47.866 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:47.866 LIB libspdk_bdev_aio.a 00:02:47.866 SO libspdk_bdev_aio.so.6.0 00:02:47.866 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:47.866 CC module/bdev/raid/raid1.o 00:02:48.123 SYMLINK libspdk_bdev_aio.so 00:02:48.123 CC module/bdev/raid/concat.o 00:02:48.123 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.123 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.123 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.123 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.381 LIB libspdk_bdev_iscsi.a 00:02:48.381 SO libspdk_bdev_iscsi.so.6.0 00:02:48.381 SYMLINK libspdk_bdev_iscsi.so 00:02:48.638 LIB libspdk_bdev_raid.a 00:02:48.638 SO libspdk_bdev_raid.so.6.0 00:02:48.638 SYMLINK libspdk_bdev_raid.so 00:02:48.896 LIB libspdk_bdev_virtio.a 00:02:48.896 SO libspdk_bdev_virtio.so.6.0 00:02:49.154 SYMLINK libspdk_bdev_virtio.so 00:02:50.088 LIB libspdk_bdev_nvme.a 00:02:50.346 SO libspdk_bdev_nvme.so.7.0 00:02:50.346 SYMLINK libspdk_bdev_nvme.so 00:02:50.912 CC module/event/subsystems/sock/sock.o 00:02:50.912 CC module/event/subsystems/vmd/vmd.o 00:02:50.912 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:50.912 CC module/event/subsystems/iobuf/iobuf.o 00:02:50.912 CC module/event/subsystems/keyring/keyring.o 00:02:50.912 CC module/event/subsystems/scheduler/scheduler.o 00:02:50.912 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:50.912 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.170 LIB libspdk_event_vhost_blk.a 00:02:51.170 LIB libspdk_event_scheduler.a 00:02:51.170 LIB libspdk_event_keyring.a 00:02:51.170 LIB libspdk_event_sock.a 00:02:51.170 LIB libspdk_event_vmd.a 00:02:51.170 LIB libspdk_event_iobuf.a 00:02:51.170 SO libspdk_event_vhost_blk.so.3.0 00:02:51.170 SO libspdk_event_scheduler.so.4.0 00:02:51.170 SO libspdk_event_keyring.so.1.0 00:02:51.170 SO libspdk_event_sock.so.5.0 00:02:51.170 SO libspdk_event_vmd.so.6.0 00:02:51.170 SO libspdk_event_iobuf.so.3.0 00:02:51.170 SYMLINK libspdk_event_vhost_blk.so 00:02:51.170 SYMLINK libspdk_event_scheduler.so 00:02:51.170 SYMLINK libspdk_event_sock.so 00:02:51.170 SYMLINK libspdk_event_keyring.so 00:02:51.170 SYMLINK libspdk_event_vmd.so 00:02:51.170 SYMLINK 
libspdk_event_iobuf.so 00:02:51.427 CC module/event/subsystems/accel/accel.o 00:02:51.684 LIB libspdk_event_accel.a 00:02:51.684 SO libspdk_event_accel.so.6.0 00:02:51.684 SYMLINK libspdk_event_accel.so 00:02:51.941 CC module/event/subsystems/bdev/bdev.o 00:02:52.198 LIB libspdk_event_bdev.a 00:02:52.198 SO libspdk_event_bdev.so.6.0 00:02:52.456 SYMLINK libspdk_event_bdev.so 00:02:52.456 CC module/event/subsystems/nbd/nbd.o 00:02:52.456 CC module/event/subsystems/ublk/ublk.o 00:02:52.456 CC module/event/subsystems/scsi/scsi.o 00:02:52.456 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:52.456 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:52.713 LIB libspdk_event_nbd.a 00:02:52.713 LIB libspdk_event_ublk.a 00:02:52.713 SO libspdk_event_nbd.so.6.0 00:02:52.713 LIB libspdk_event_scsi.a 00:02:52.713 SO libspdk_event_ublk.so.3.0 00:02:52.713 SO libspdk_event_scsi.so.6.0 00:02:52.713 SYMLINK libspdk_event_nbd.so 00:02:52.713 SYMLINK libspdk_event_ublk.so 00:02:52.971 LIB libspdk_event_nvmf.a 00:02:52.971 SYMLINK libspdk_event_scsi.so 00:02:52.971 SO libspdk_event_nvmf.so.6.0 00:02:52.971 SYMLINK libspdk_event_nvmf.so 00:02:53.229 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.229 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.229 LIB libspdk_event_vhost_scsi.a 00:02:53.229 SO libspdk_event_vhost_scsi.so.3.0 00:02:53.229 LIB libspdk_event_iscsi.a 00:02:53.487 SO libspdk_event_iscsi.so.6.0 00:02:53.487 SYMLINK libspdk_event_vhost_scsi.so 00:02:53.487 SYMLINK libspdk_event_iscsi.so 00:02:53.487 SO libspdk.so.6.0 00:02:53.487 SYMLINK libspdk.so 00:02:53.745 CC app/trace_record/trace_record.o 00:02:53.745 CC app/spdk_nvme_identify/identify.o 00:02:53.745 CC app/spdk_nvme_perf/perf.o 00:02:53.745 CC app/spdk_lspci/spdk_lspci.o 00:02:53.745 CXX app/trace/trace.o 00:02:54.003 CC app/nvmf_tgt/nvmf_main.o 00:02:54.003 CC app/iscsi_tgt/iscsi_tgt.o 00:02:54.003 CC app/spdk_tgt/spdk_tgt.o 00:02:54.003 CC examples/util/zipf/zipf.o 00:02:54.003 CC test/thread/poller_perf/poller_perf.o 00:02:54.003 LINK spdk_lspci 00:02:54.262 LINK nvmf_tgt 00:02:54.262 LINK poller_perf 00:02:54.262 LINK spdk_trace_record 00:02:54.262 LINK iscsi_tgt 00:02:54.262 LINK spdk_tgt 00:02:54.262 LINK zipf 00:02:54.262 LINK spdk_trace 00:02:54.519 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.519 CC app/spdk_top/spdk_top.o 00:02:54.519 CC app/spdk_dd/spdk_dd.o 00:02:54.519 CC examples/ioat/perf/perf.o 00:02:54.519 CC test/dma/test_dma/test_dma.o 00:02:54.519 LINK spdk_nvme_discover 00:02:54.776 CC app/fio/nvme/fio_plugin.o 00:02:54.776 CC examples/ioat/verify/verify.o 00:02:54.776 CC test/app/bdev_svc/bdev_svc.o 00:02:54.776 LINK ioat_perf 00:02:55.034 LINK verify 00:02:55.034 CC app/vhost/vhost.o 00:02:55.034 LINK bdev_svc 00:02:55.034 LINK spdk_nvme_perf 00:02:55.034 LINK spdk_dd 00:02:55.034 LINK spdk_nvme_identify 00:02:55.034 LINK test_dma 00:02:55.034 CC app/fio/bdev/fio_plugin.o 00:02:55.292 LINK vhost 00:02:55.292 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.292 LINK spdk_nvme 00:02:55.292 CC test/app/histogram_perf/histogram_perf.o 00:02:55.292 CC examples/idxd/perf/perf.o 00:02:55.292 TEST_HEADER include/spdk/accel.h 00:02:55.292 TEST_HEADER include/spdk/accel_module.h 00:02:55.292 TEST_HEADER include/spdk/assert.h 00:02:55.292 TEST_HEADER include/spdk/barrier.h 00:02:55.292 TEST_HEADER include/spdk/base64.h 00:02:55.292 TEST_HEADER include/spdk/bdev.h 00:02:55.292 TEST_HEADER include/spdk/bdev_module.h 00:02:55.292 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.551 TEST_HEADER include/spdk/bit_array.h 
00:02:55.551 TEST_HEADER include/spdk/bit_pool.h 00:02:55.551 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.551 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.551 TEST_HEADER include/spdk/blobfs.h 00:02:55.551 TEST_HEADER include/spdk/blob.h 00:02:55.551 TEST_HEADER include/spdk/conf.h 00:02:55.551 TEST_HEADER include/spdk/config.h 00:02:55.551 TEST_HEADER include/spdk/cpuset.h 00:02:55.551 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.551 TEST_HEADER include/spdk/crc16.h 00:02:55.551 TEST_HEADER include/spdk/crc32.h 00:02:55.551 TEST_HEADER include/spdk/crc64.h 00:02:55.551 TEST_HEADER include/spdk/dif.h 00:02:55.551 TEST_HEADER include/spdk/dma.h 00:02:55.551 TEST_HEADER include/spdk/endian.h 00:02:55.552 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.552 TEST_HEADER include/spdk/env.h 00:02:55.552 TEST_HEADER include/spdk/event.h 00:02:55.552 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.552 TEST_HEADER include/spdk/fd_group.h 00:02:55.552 TEST_HEADER include/spdk/fd.h 00:02:55.552 TEST_HEADER include/spdk/file.h 00:02:55.552 TEST_HEADER include/spdk/ftl.h 00:02:55.552 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.552 TEST_HEADER include/spdk/hexlify.h 00:02:55.552 TEST_HEADER include/spdk/histogram_data.h 00:02:55.552 TEST_HEADER include/spdk/idxd.h 00:02:55.552 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.552 TEST_HEADER include/spdk/init.h 00:02:55.552 TEST_HEADER include/spdk/ioat.h 00:02:55.552 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.552 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.552 TEST_HEADER include/spdk/json.h 00:02:55.552 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.552 TEST_HEADER include/spdk/keyring.h 00:02:55.552 LINK lsvmd 00:02:55.552 TEST_HEADER include/spdk/keyring_module.h 00:02:55.552 TEST_HEADER include/spdk/likely.h 00:02:55.552 TEST_HEADER include/spdk/log.h 00:02:55.552 TEST_HEADER include/spdk/lvol.h 00:02:55.552 TEST_HEADER include/spdk/memory.h 00:02:55.552 TEST_HEADER include/spdk/mmio.h 00:02:55.552 TEST_HEADER include/spdk/nbd.h 00:02:55.552 TEST_HEADER include/spdk/net.h 00:02:55.552 TEST_HEADER include/spdk/notify.h 00:02:55.552 TEST_HEADER include/spdk/nvme.h 00:02:55.552 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.552 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.552 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.552 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.552 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.552 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.552 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.552 TEST_HEADER include/spdk/nvmf.h 00:02:55.552 LINK histogram_perf 00:02:55.552 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.552 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.552 TEST_HEADER include/spdk/opal.h 00:02:55.552 TEST_HEADER include/spdk/opal_spec.h 00:02:55.552 TEST_HEADER include/spdk/pci_ids.h 00:02:55.552 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.552 TEST_HEADER include/spdk/pipe.h 00:02:55.552 TEST_HEADER include/spdk/queue.h 00:02:55.552 TEST_HEADER include/spdk/reduce.h 00:02:55.552 TEST_HEADER include/spdk/rpc.h 00:02:55.552 TEST_HEADER include/spdk/scheduler.h 00:02:55.552 TEST_HEADER include/spdk/scsi.h 00:02:55.552 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.552 TEST_HEADER include/spdk/sock.h 00:02:55.552 TEST_HEADER include/spdk/stdinc.h 00:02:55.552 TEST_HEADER include/spdk/string.h 00:02:55.552 TEST_HEADER include/spdk/thread.h 00:02:55.552 TEST_HEADER include/spdk/trace.h 00:02:55.552 TEST_HEADER include/spdk/trace_parser.h 00:02:55.552 TEST_HEADER include/spdk/tree.h 
00:02:55.552 TEST_HEADER include/spdk/ublk.h 00:02:55.552 TEST_HEADER include/spdk/util.h 00:02:55.552 TEST_HEADER include/spdk/uuid.h 00:02:55.552 TEST_HEADER include/spdk/version.h 00:02:55.552 LINK spdk_top 00:02:55.552 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.552 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.552 TEST_HEADER include/spdk/vhost.h 00:02:55.552 TEST_HEADER include/spdk/vmd.h 00:02:55.552 TEST_HEADER include/spdk/xor.h 00:02:55.552 TEST_HEADER include/spdk/zipf.h 00:02:55.552 CXX test/cpp_headers/accel.o 00:02:55.810 LINK interrupt_tgt 00:02:55.810 CC examples/thread/thread/thread_ex.o 00:02:55.810 LINK spdk_bdev 00:02:55.810 CC examples/vmd/led/led.o 00:02:55.810 LINK idxd_perf 00:02:55.810 CC test/app/jsoncat/jsoncat.o 00:02:55.810 CXX test/cpp_headers/accel_module.o 00:02:55.810 CC test/app/stub/stub.o 00:02:56.067 LINK nvme_fuzz 00:02:56.067 LINK led 00:02:56.067 LINK jsoncat 00:02:56.067 CXX test/cpp_headers/assert.o 00:02:56.067 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.067 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:56.067 LINK thread 00:02:56.067 CC examples/sock/hello_world/hello_sock.o 00:02:56.067 LINK stub 00:02:56.067 CXX test/cpp_headers/barrier.o 00:02:56.067 CXX test/cpp_headers/base64.o 00:02:56.067 CXX test/cpp_headers/bdev.o 00:02:56.325 CC test/event/event_perf/event_perf.o 00:02:56.325 CXX test/cpp_headers/bdev_module.o 00:02:56.325 CC test/env/vtophys/vtophys.o 00:02:56.325 CC test/event/reactor_perf/reactor_perf.o 00:02:56.325 CC test/event/reactor/reactor.o 00:02:56.325 CC test/env/mem_callbacks/mem_callbacks.o 00:02:56.325 LINK hello_sock 00:02:56.583 LINK event_perf 00:02:56.583 CC test/nvme/aer/aer.o 00:02:56.583 LINK vhost_fuzz 00:02:56.583 LINK vtophys 00:02:56.583 LINK reactor_perf 00:02:56.583 LINK reactor 00:02:56.583 CXX test/cpp_headers/bdev_zone.o 00:02:56.840 CXX test/cpp_headers/bit_array.o 00:02:56.840 CXX test/cpp_headers/bit_pool.o 00:02:56.840 CC examples/accel/perf/accel_perf.o 00:02:56.840 CC test/event/app_repeat/app_repeat.o 00:02:56.840 LINK aer 00:02:56.840 CXX test/cpp_headers/blob_bdev.o 00:02:56.840 CC test/nvme/reset/reset.o 00:02:57.098 CC examples/nvme/hello_world/hello_world.o 00:02:57.098 CC examples/blob/hello_world/hello_blob.o 00:02:57.098 LINK app_repeat 00:02:57.098 CC test/nvme/sgl/sgl.o 00:02:57.098 CXX test/cpp_headers/blobfs_bdev.o 00:02:57.098 LINK mem_callbacks 00:02:57.356 LINK hello_world 00:02:57.356 LINK hello_blob 00:02:57.356 CC examples/nvme/reconnect/reconnect.o 00:02:57.356 LINK reset 00:02:57.356 CXX test/cpp_headers/blobfs.o 00:02:57.356 CC test/event/scheduler/scheduler.o 00:02:57.356 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.356 LINK sgl 00:02:57.356 LINK accel_perf 00:02:57.356 CXX test/cpp_headers/blob.o 00:02:57.615 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:57.615 LINK env_dpdk_post_init 00:02:57.615 CC examples/nvme/arbitration/arbitration.o 00:02:57.615 CC examples/blob/cli/blobcli.o 00:02:57.615 LINK scheduler 00:02:57.615 CXX test/cpp_headers/conf.o 00:02:57.615 CC test/nvme/e2edp/nvme_dp.o 00:02:57.615 LINK reconnect 00:02:57.873 CC examples/nvme/hotplug/hotplug.o 00:02:57.873 CXX test/cpp_headers/config.o 00:02:57.873 LINK iscsi_fuzz 00:02:57.873 CC test/env/memory/memory_ut.o 00:02:57.873 CXX test/cpp_headers/cpuset.o 00:02:57.873 CXX test/cpp_headers/crc16.o 00:02:57.873 LINK arbitration 00:02:57.873 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.873 LINK nvme_dp 00:02:58.131 LINK hotplug 00:02:58.131 CXX test/cpp_headers/crc32.o 
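The TEST_HEADER / CXX test/cpp_headers/*.o lines above record SPDK's header self-containment check: every public header named by TEST_HEADER is compiled into its own object file, so a header that forgets one of its own includes fails right here. The sketch below is only an illustration of that idea, not the actual test/cpp_headers harness; the file layout and compiler flags are assumptions.

# Hedged sketch only -- the real harness is generated by the build system and
# may use different paths and flags.
set -e
mkdir -p cpp_headers
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # Each translation unit does nothing but include the header under test,
    # so a missing transitive include surfaces as a compile error.
    printf '#include <spdk/%s.h>\n' "$name" > "cpp_headers/$name.cpp"
    g++ -Iinclude -c "cpp_headers/$name.cpp" -o "cpp_headers/$name.o"
done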
00:02:58.131 CXX test/cpp_headers/crc64.o 00:02:58.131 CC examples/nvme/abort/abort.o 00:02:58.131 LINK nvme_manage 00:02:58.131 LINK cmb_copy 00:02:58.131 CXX test/cpp_headers/dif.o 00:02:58.131 LINK blobcli 00:02:58.131 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:58.131 CC test/nvme/overhead/overhead.o 00:02:58.389 CXX test/cpp_headers/dma.o 00:02:58.389 CXX test/cpp_headers/endian.o 00:02:58.389 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.389 LINK pmr_persistence 00:02:58.389 CC test/nvme/err_injection/err_injection.o 00:02:58.389 CC examples/bdev/bdevperf/bdevperf.o 00:02:58.647 CC test/env/pci/pci_ut.o 00:02:58.647 CC test/nvme/startup/startup.o 00:02:58.647 LINK abort 00:02:58.647 CXX test/cpp_headers/env_dpdk.o 00:02:58.647 LINK overhead 00:02:58.647 LINK err_injection 00:02:58.647 LINK hello_bdev 00:02:58.647 CC test/nvme/reserve/reserve.o 00:02:58.647 CXX test/cpp_headers/env.o 00:02:58.647 LINK startup 00:02:58.904 CC test/nvme/simple_copy/simple_copy.o 00:02:58.905 CC test/rpc_client/rpc_client_test.o 00:02:58.905 CXX test/cpp_headers/event.o 00:02:58.905 CXX test/cpp_headers/fd_group.o 00:02:58.905 CC test/nvme/connect_stress/connect_stress.o 00:02:58.905 LINK reserve 00:02:59.163 CC test/nvme/boot_partition/boot_partition.o 00:02:59.163 LINK pci_ut 00:02:59.163 LINK rpc_client_test 00:02:59.163 CXX test/cpp_headers/fd.o 00:02:59.163 LINK simple_copy 00:02:59.163 LINK memory_ut 00:02:59.163 LINK connect_stress 00:02:59.163 LINK boot_partition 00:02:59.420 CC test/nvme/compliance/nvme_compliance.o 00:02:59.420 CXX test/cpp_headers/file.o 00:02:59.420 CC test/accel/dif/dif.o 00:02:59.420 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.420 LINK bdevperf 00:02:59.420 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.420 CC test/nvme/cuse/cuse.o 00:02:59.420 CC test/nvme/fdp/fdp.o 00:02:59.420 CXX test/cpp_headers/ftl.o 00:02:59.677 LINK fused_ordering 00:02:59.677 CC test/blobfs/mkfs/mkfs.o 00:02:59.677 LINK doorbell_aers 00:02:59.677 CXX test/cpp_headers/gpt_spec.o 00:02:59.677 CC test/lvol/esnap/esnap.o 00:02:59.677 LINK nvme_compliance 00:02:59.935 CXX test/cpp_headers/hexlify.o 00:02:59.935 LINK mkfs 00:02:59.935 CXX test/cpp_headers/histogram_data.o 00:02:59.935 CXX test/cpp_headers/idxd.o 00:02:59.935 LINK fdp 00:02:59.935 CC examples/nvmf/nvmf/nvmf.o 00:02:59.935 LINK dif 00:02:59.935 CXX test/cpp_headers/idxd_spec.o 00:02:59.935 CXX test/cpp_headers/init.o 00:02:59.935 CXX test/cpp_headers/ioat.o 00:03:00.193 CXX test/cpp_headers/ioat_spec.o 00:03:00.193 CXX test/cpp_headers/iscsi_spec.o 00:03:00.193 CXX test/cpp_headers/json.o 00:03:00.193 CXX test/cpp_headers/jsonrpc.o 00:03:00.193 CXX test/cpp_headers/keyring.o 00:03:00.193 CXX test/cpp_headers/keyring_module.o 00:03:00.193 CXX test/cpp_headers/likely.o 00:03:00.193 CXX test/cpp_headers/log.o 00:03:00.193 CXX test/cpp_headers/lvol.o 00:03:00.193 LINK nvmf 00:03:00.193 CXX test/cpp_headers/memory.o 00:03:00.451 CC test/bdev/bdevio/bdevio.o 00:03:00.451 CXX test/cpp_headers/mmio.o 00:03:00.451 CXX test/cpp_headers/nbd.o 00:03:00.451 CXX test/cpp_headers/net.o 00:03:00.451 CXX test/cpp_headers/notify.o 00:03:00.451 CXX test/cpp_headers/nvme_intel.o 00:03:00.451 CXX test/cpp_headers/nvme.o 00:03:00.451 CXX test/cpp_headers/nvme_ocssd.o 00:03:00.451 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:00.451 CXX test/cpp_headers/nvme_spec.o 00:03:00.710 CXX test/cpp_headers/nvme_zns.o 00:03:00.710 CXX test/cpp_headers/nvmf_cmd.o 00:03:00.710 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:00.710 CXX 
test/cpp_headers/nvmf.o 00:03:00.710 CXX test/cpp_headers/nvmf_spec.o 00:03:00.710 CXX test/cpp_headers/nvmf_transport.o 00:03:00.710 CXX test/cpp_headers/opal.o 00:03:00.968 CXX test/cpp_headers/opal_spec.o 00:03:00.968 LINK bdevio 00:03:00.968 CXX test/cpp_headers/pci_ids.o 00:03:00.968 CXX test/cpp_headers/pipe.o 00:03:00.968 CXX test/cpp_headers/queue.o 00:03:00.968 CXX test/cpp_headers/reduce.o 00:03:00.968 CXX test/cpp_headers/rpc.o 00:03:00.968 CXX test/cpp_headers/scheduler.o 00:03:00.968 CXX test/cpp_headers/scsi.o 00:03:00.968 CXX test/cpp_headers/scsi_spec.o 00:03:00.968 CXX test/cpp_headers/sock.o 00:03:00.968 CXX test/cpp_headers/stdinc.o 00:03:01.225 LINK cuse 00:03:01.225 CXX test/cpp_headers/string.o 00:03:01.225 CXX test/cpp_headers/thread.o 00:03:01.225 CXX test/cpp_headers/trace.o 00:03:01.225 CXX test/cpp_headers/trace_parser.o 00:03:01.226 CXX test/cpp_headers/tree.o 00:03:01.226 CXX test/cpp_headers/ublk.o 00:03:01.226 CXX test/cpp_headers/util.o 00:03:01.226 CXX test/cpp_headers/uuid.o 00:03:01.226 CXX test/cpp_headers/version.o 00:03:01.226 CXX test/cpp_headers/vfio_user_pci.o 00:03:01.226 CXX test/cpp_headers/vfio_user_spec.o 00:03:01.226 CXX test/cpp_headers/vhost.o 00:03:01.226 CXX test/cpp_headers/vmd.o 00:03:01.226 CXX test/cpp_headers/xor.o 00:03:01.226 CXX test/cpp_headers/zipf.o 00:03:07.807 LINK esnap 00:03:07.807 00:03:07.807 real 1m19.109s 00:03:07.807 user 7m37.249s 00:03:07.807 sys 1m33.516s 00:03:07.807 12:58:59 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:07.807 12:58:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:07.807 ************************************ 00:03:07.807 END TEST make 00:03:07.807 ************************************ 00:03:07.807 12:58:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:07.807 12:58:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:07.807 12:58:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:07.807 12:58:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.807 12:58:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:07.807 12:58:59 -- pm/common@44 -- $ pid=5227 00:03:07.807 12:58:59 -- pm/common@50 -- $ kill -TERM 5227 00:03:07.807 12:58:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.807 12:58:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:07.807 12:58:59 -- pm/common@44 -- $ pid=5229 00:03:07.807 12:58:59 -- pm/common@50 -- $ kill -TERM 5229 00:03:07.807 12:58:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:07.807 12:58:59 -- nvmf/common.sh@7 -- # uname -s 00:03:07.807 12:58:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:07.807 12:58:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:07.807 12:58:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:07.807 12:58:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:07.808 12:58:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:07.808 12:58:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:07.808 12:58:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:07.808 12:58:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:07.808 12:58:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:07.808 12:58:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:07.808 12:58:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81f3884e-77f2-48f6-93b2-e58369b5121e 00:03:07.808 12:58:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=81f3884e-77f2-48f6-93b2-e58369b5121e 00:03:07.808 12:58:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:07.808 12:58:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:07.808 12:58:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:07.808 12:58:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:07.808 12:58:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:07.808 12:58:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:07.808 12:58:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:07.808 12:58:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:07.808 12:58:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.808 12:58:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.808 12:58:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.808 12:58:59 -- paths/export.sh@5 -- # export PATH 00:03:07.808 12:58:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.808 12:58:59 -- nvmf/common.sh@47 -- # : 0 00:03:07.808 12:58:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:07.808 12:58:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:07.808 12:58:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:07.808 12:58:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:07.808 12:58:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:07.808 12:58:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:07.808 12:58:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:07.808 12:58:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:07.808 12:58:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:07.808 12:58:59 -- spdk/autotest.sh@32 -- # uname -s 00:03:07.808 12:58:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:07.808 12:58:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:07.808 12:58:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:07.808 12:58:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:07.808 12:58:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:07.808 12:58:59 -- spdk/autotest.sh@44 -- # modprobe nbd 
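Just before the modprobe nbd step, the trace shows autotest.sh saving the distribution's core_pattern (the systemd-coredump pipe) and pointing core dumps at SPDK's scripts/core-collector.sh, with collected cores kept under the shared output directory. A rough, hedged reconstruction of that step follows; the redirect into /proc/sys/kernel/core_pattern is an assumption (the trace only shows the echo), and the destination of the second echo in the trace is elided, so it is omitted here.

# Hedged reconstruction of the coredump capture setup; paths follow the trace,
# the redirect target is an assumption.
rootdir=/home/vagrant/spdk_repo/spdk
output_dir=$rootdir/../output

# Keep the old pattern (|/usr/lib/systemd/systemd-coredump ...) so it can be
# restored after the run.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)

mkdir -p "$output_dir/coredumps"
# Any process that crashes during the test run is now piped through the collector.
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern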
00:03:07.808 12:58:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:07.808 12:58:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:07.808 12:58:59 -- spdk/autotest.sh@48 -- # udevadm_pid=53812 00:03:07.808 12:58:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:07.808 12:58:59 -- pm/common@17 -- # local monitor 00:03:07.808 12:58:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.808 12:58:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.808 12:58:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:07.808 12:58:59 -- pm/common@25 -- # sleep 1 00:03:07.808 12:58:59 -- pm/common@21 -- # date +%s 00:03:07.808 12:58:59 -- pm/common@21 -- # date +%s 00:03:07.808 12:58:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721912339 00:03:07.808 12:58:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721912339 00:03:07.808 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721912339_collect-vmstat.pm.log 00:03:07.808 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721912339_collect-cpu-load.pm.log 00:03:08.373 12:59:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:08.373 12:59:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:08.373 12:59:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:08.373 12:59:00 -- common/autotest_common.sh@10 -- # set +x 00:03:08.374 12:59:00 -- spdk/autotest.sh@59 -- # create_test_list 00:03:08.374 12:59:00 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:08.374 12:59:00 -- common/autotest_common.sh@10 -- # set +x 00:03:08.374 12:59:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:08.374 12:59:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:08.632 12:59:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:08.632 12:59:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:08.632 12:59:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:08.632 12:59:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:08.632 12:59:00 -- common/autotest_common.sh@1455 -- # uname 00:03:08.632 12:59:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:08.632 12:59:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:08.632 12:59:00 -- common/autotest_common.sh@1475 -- # uname 00:03:08.632 12:59:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:08.632 12:59:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:08.632 12:59:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:08.632 12:59:00 -- spdk/autotest.sh@72 -- # hash lcov 00:03:08.632 12:59:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:08.632 12:59:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:08.632 --rc lcov_branch_coverage=1 00:03:08.632 --rc lcov_function_coverage=1 00:03:08.632 --rc genhtml_branch_coverage=1 00:03:08.632 --rc genhtml_function_coverage=1 00:03:08.632 --rc genhtml_legend=1 00:03:08.632 --rc geninfo_all_blocks=1 00:03:08.632 ' 00:03:08.632 12:59:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:08.632 --rc lcov_branch_coverage=1 00:03:08.632 --rc 
lcov_function_coverage=1 00:03:08.632 --rc genhtml_branch_coverage=1 00:03:08.632 --rc genhtml_function_coverage=1 00:03:08.632 --rc genhtml_legend=1 00:03:08.633 --rc geninfo_all_blocks=1 00:03:08.633 ' 00:03:08.633 12:59:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:08.633 --rc lcov_branch_coverage=1 00:03:08.633 --rc lcov_function_coverage=1 00:03:08.633 --rc genhtml_branch_coverage=1 00:03:08.633 --rc genhtml_function_coverage=1 00:03:08.633 --rc genhtml_legend=1 00:03:08.633 --rc geninfo_all_blocks=1 00:03:08.633 --no-external' 00:03:08.633 12:59:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:08.633 --rc lcov_branch_coverage=1 00:03:08.633 --rc lcov_function_coverage=1 00:03:08.633 --rc genhtml_branch_coverage=1 00:03:08.633 --rc genhtml_function_coverage=1 00:03:08.633 --rc genhtml_legend=1 00:03:08.633 --rc geninfo_all_blocks=1 00:03:08.633 --no-external' 00:03:08.633 12:59:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:08.633 lcov: LCOV version 1.14 00:03:08.633 12:59:00 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:23.507 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:23.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:35.709 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:35.709 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:35.709 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:35.709 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:35.710 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:35.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:35.710 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:35.711 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:35.711 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:35.711 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:35.711 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:35.711 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:35.711 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:35.711 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:35.711 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:35.711 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:35.711 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:35.711 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:37.627 12:59:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:37.627 12:59:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:37.627 12:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:37.627 12:59:29 -- spdk/autotest.sh@91 -- # rm -f 00:03:37.627 12:59:29 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:03:38.488 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:38.488 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:38.488 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:38.488 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:38.488 12:59:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:38.488 12:59:30 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.488 12:59:30 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.488 12:59:30 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.488 12:59:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.488 12:59:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.488 12:59:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.488 12:59:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.488 12:59:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:03:38.488 12:59:30 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:03:38.488 12:59:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.488 12:59:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:03:38.488 12:59:30 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:03:38.488 12:59:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.488 12:59:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:03:38.488 12:59:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.488 12:59:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.489 12:59:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:38.489 12:59:30 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:38.489 12:59:30 -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:03:38.489 12:59:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.489 12:59:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:38.489 12:59:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.489 12:59:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.489 12:59:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:38.489 12:59:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:38.489 12:59:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:38.747 No valid GPT data, bailing 00:03:38.747 12:59:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.747 12:59:30 -- scripts/common.sh@391 -- # pt= 00:03:38.747 12:59:30 -- scripts/common.sh@392 -- # return 1 00:03:38.747 12:59:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:38.747 1+0 records in 00:03:38.747 1+0 records out 00:03:38.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136841 s, 76.6 MB/s 00:03:38.747 12:59:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.747 12:59:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.747 12:59:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:38.747 12:59:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:38.747 12:59:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:38.747 No valid GPT data, bailing 00:03:38.747 12:59:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:38.747 12:59:30 -- scripts/common.sh@391 -- # pt= 00:03:38.747 12:59:30 -- scripts/common.sh@392 -- # return 1 00:03:38.747 12:59:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:38.747 1+0 records in 00:03:38.747 1+0 records out 00:03:38.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402962 s, 260 MB/s 00:03:38.747 12:59:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.747 12:59:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.747 12:59:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:03:38.747 12:59:30 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:38.747 12:59:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:38.747 No valid GPT data, bailing 00:03:38.747 12:59:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:38.747 12:59:30 -- scripts/common.sh@391 -- # pt= 00:03:38.747 12:59:30 -- scripts/common.sh@392 -- # return 1 00:03:38.747 12:59:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:38.747 1+0 records in 00:03:38.747 1+0 records out 00:03:38.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454233 s, 231 MB/s 00:03:38.747 12:59:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.747 12:59:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.748 12:59:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:03:38.748 12:59:30 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:03:38.748 12:59:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:39.006 No valid GPT data, bailing 00:03:39.006 12:59:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:39.006 12:59:30 -- scripts/common.sh@391 -- # pt= 00:03:39.006 12:59:30 -- scripts/common.sh@392 -- # return 1 00:03:39.006 12:59:30 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:39.006 1+0 records in 00:03:39.006 1+0 records out 00:03:39.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428123 s, 245 MB/s 00:03:39.006 12:59:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:39.006 12:59:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:39.006 12:59:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:03:39.006 12:59:30 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:03:39.006 12:59:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:39.006 No valid GPT data, bailing 00:03:39.006 12:59:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:39.006 12:59:31 -- scripts/common.sh@391 -- # pt= 00:03:39.006 12:59:31 -- scripts/common.sh@392 -- # return 1 00:03:39.006 12:59:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:39.006 1+0 records in 00:03:39.006 1+0 records out 00:03:39.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412466 s, 254 MB/s 00:03:39.006 12:59:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:39.006 12:59:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:39.006 12:59:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:03:39.006 12:59:31 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:03:39.006 12:59:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:39.006 No valid GPT data, bailing 00:03:39.006 12:59:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:39.006 12:59:31 -- scripts/common.sh@391 -- # pt= 00:03:39.006 12:59:31 -- scripts/common.sh@392 -- # return 1 00:03:39.006 12:59:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:39.006 1+0 records in 00:03:39.006 1+0 records out 00:03:39.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441878 s, 237 MB/s 00:03:39.006 12:59:31 -- spdk/autotest.sh@118 -- # sync 00:03:39.265 12:59:31 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:39.265 12:59:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:39.265 12:59:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:41.170 12:59:32 -- spdk/autotest.sh@124 -- # uname -s 00:03:41.170 12:59:32 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:41.170 12:59:32 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:41.170 12:59:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.170 12:59:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.170 12:59:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.170 ************************************ 00:03:41.170 START TEST setup.sh 00:03:41.170 ************************************ 00:03:41.170 12:59:32 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:41.170 * Looking for test storage... 
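The pre-cleanup pass traced above walks every non-partition NVMe namespace: zoned namespaces are skipped (is_block_zoned reads /sys/block/*/queue/zoned), spdk-gpt.py and blkid -s PTTYPE decide whether the device still carries a partition table, and devices that do not are zeroed for their first MiB with dd. A hedged reconstruction of that loop follows; helper names mirror the trace, but the real logic lives in autotest.sh and scripts/common.sh and may differ in detail.

# Hedged reconstruction of the pre-cleanup wipe, not the real autotest.sh code.
shopt -s extglob    # required for the /dev/nvme*n!(*p*) glob seen in the trace
rootdir=/home/vagrant/spdk_repo/spdk

for dev in /dev/nvme*n!(*p*); do
    name=${dev##*/}
    # is_block_zoned: leave zoned namespaces alone.
    if [[ -e /sys/block/$name/queue/zoned && $(< /sys/block/$name/queue/zoned) != none ]]; then
        continue
    fi
    # block_in_use: a readable GPT or any known partition-table type means the
    # namespace is in use ("No valid GPT data, bailing" means it is not).
    if "$rootdir/scripts/spdk-gpt.py" "$dev" || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
        continue
    fi
    # Unused and unpartitioned: clear the first MiB of stale metadata.
    dd if=/dev/zero of="$dev" bs=1M count=1
done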
00:03:41.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.170 12:59:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:41.170 12:59:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:41.170 12:59:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:41.170 12:59:33 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.170 12:59:33 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.170 12:59:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.170 ************************************ 00:03:41.170 START TEST acl 00:03:41.170 ************************************ 00:03:41.170 12:59:33 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:41.170 * Looking for test storage... 00:03:41.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.170 12:59:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:41.170 12:59:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.170 12:59:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.170 12:59:33 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:03:41.171 12:59:33 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:41.171 12:59:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.171 12:59:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:41.171 12:59:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:41.171 12:59:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:41.171 12:59:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:41.171 12:59:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:41.171 12:59:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.171 12:59:33 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.105 12:59:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:42.105 12:59:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:42.105 12:59:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.105 12:59:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:42.105 12:59:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.105 12:59:34 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:42.672 12:59:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:42.672 12:59:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.672 12:59:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.240 Hugepages 00:03:43.240 node hugesize free / total 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.240 00:03:43.240 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:43.240 12:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.498 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:03:43.498 12:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:43.498 12:59:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:03:43.498 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:43.498 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:43.498 12:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.498 12:59:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:03:43.499 12:59:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:43.499 12:59:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:03:43.499 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:43.499 12:59:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:43.499 12:59:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.499 12:59:35 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:43.499 12:59:35 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:43.499 12:59:35 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.499 12:59:35 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.499 12:59:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.499 ************************************ 00:03:43.499 START TEST denied 00:03:43.499 ************************************ 00:03:43.499 12:59:35 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:43.499 12:59:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:43.499 12:59:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:43.499 12:59:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.499 12:59:35 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:43.499 12:59:35 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.874 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.874 12:59:36 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.874 12:59:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.437 00:03:51.437 real 0m7.133s 00:03:51.437 user 0m0.819s 00:03:51.437 sys 0m1.325s 00:03:51.437 12:59:42 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.437 12:59:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:51.437 ************************************ 00:03:51.437 END TEST denied 00:03:51.437 ************************************ 00:03:51.437 12:59:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:51.437 12:59:42 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.437 12:59:42 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.437 12:59:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.437 ************************************ 00:03:51.437 START TEST allowed 00:03:51.437 ************************************ 00:03:51.437 12:59:42 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:51.437 12:59:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:51.437 12:59:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:51.437 12:59:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:51.437 12:59:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.437 12:59:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.695 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.695 12:59:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.071 00:03:53.071 real 0m2.165s 00:03:53.071 user 0m0.982s 00:03:53.071 sys 0m1.169s 00:03:53.071 12:59:44 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.071 12:59:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:53.071 ************************************ 00:03:53.071 END TEST allowed 00:03:53.071 ************************************ 00:03:53.071 00:03:53.071 real 0m11.894s 00:03:53.071 user 0m2.989s 00:03:53.071 sys 0m3.903s 00:03:53.071 12:59:44 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.071 12:59:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:53.071 ************************************ 00:03:53.071 END TEST acl 00:03:53.071 ************************************ 00:03:53.071 12:59:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:53.071 12:59:44 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.071 12:59:44 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.071 12:59:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.071 ************************************ 00:03:53.071 START TEST hugepages 00:03:53.071 ************************************ 00:03:53.071 12:59:45 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:53.071 * Looking for test storage... 
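Editor's note: the long trace that follows is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field until it reaches the requested key (Hugepagesize here, later AnonHugePages and HugePages_Surp), which is why every other meminfo field appears as a "continue" step. A rough standalone equivalent of that lookup, assuming the key and optional NUMA node arguments and the per-node meminfo path implied by the trace (this is not the SPDK helper itself):

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup driving the trace below; not the SPDK implementation.
get_meminfo() {
    local key=$1 node=${2:-}
    local src=/proc/meminfo
    # With a node argument the per-node counters are read instead, mirroring
    # the /sys/devices/system/node/node$node/meminfo handling seen in the trace.
    [[ -n $node ]] && src=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do      # split "Key:   value kB" into key/value
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$src"
    return 1
}

# Example: echoes the value reported as "2048" in the Hugepagesize lookup below.
get_meminfo Hugepagesize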
00:03:53.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.071 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5788948 kB' 'MemAvailable: 7390488 kB' 'Buffers: 2436 kB' 'Cached: 1814780 kB' 'SwapCached: 0 kB' 'Active: 444396 kB' 'Inactive: 1474720 kB' 'Active(anon): 112412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 103612 kB' 'Mapped: 48664 kB' 'Shmem: 10512 kB' 'KReclaimable: 63552 kB' 'Slab: 136212 kB' 'SReclaimable: 63552 kB' 'SUnreclaim: 72660 kB' 'KernelStack: 6360 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 327224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.072 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:53.073 12:59:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:53.073 12:59:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.073 12:59:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.073 12:59:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.073 ************************************ 00:03:53.073 START TEST default_setup 00:03:53.073 ************************************ 00:03:53.073 12:59:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:53.073 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:53.073 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.073 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.074 12:59:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:53.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.210 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.210 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.210 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.210 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.210 
12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7888728 kB' 'MemAvailable: 9490052 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 463052 kB' 'Inactive: 1474736 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 122208 kB' 'Mapped: 48688 kB' 'Shmem: 10476 kB' 'KReclaimable: 63084 kB' 'Slab: 135600 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72516 kB' 'KernelStack: 6368 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.210 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.210 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.211 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.475 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7888728 kB' 'MemAvailable: 9490052 kB' 'Buffers: 2436 kB' 'Cached: 1814760 kB' 'SwapCached: 0 kB' 'Active: 462648 kB' 'Inactive: 1474736 kB' 'Active(anon): 130664 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121796 kB' 'Mapped: 48560 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135604 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72520 kB' 'KernelStack: 6352 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.476 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.477 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.477 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7888728 kB' 'MemAvailable: 9490052 kB' 'Buffers: 2436 kB' 'Cached: 1814760 kB' 'SwapCached: 0 kB' 'Active: 462856 kB' 'Inactive: 1474736 kB' 'Active(anon): 130872 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122028 kB' 'Mapped: 48820 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135604 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72520 kB' 'KernelStack: 6388 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.478 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 
12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
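The same field-by-field scan repeats for HugePages_Surp and, in the entries around this point, HugePages_Rsvd; setup/hugepages.sh then checks those counters against the 1024 pages the default_setup test requested. Roughly, with the values reported in this run (the variable names and awk form are illustrative, not the actual hugepages.sh code):

    # Sketch of the accounting assertion that appears a few entries below:
    # (( 1024 == nr_hugepages + surp + resv ))
    nr_hugepages=1024                                            # echoed by the test as nr_hugepages=1024
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 in this run

    if (( 1024 == nr_hugepages + surp + resv )); then
        echo 'default_setup: hugepage pool holds the expected 1024 pages'
    else
        echo 'default_setup: unexpected hugepage accounting' >&2
    fi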
00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.479 nr_hugepages=1024 00:03:54.479 resv_hugepages=0 00:03:54.479 surplus_hugepages=0 00:03:54.479 anon_hugepages=0 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.479 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7888728 kB' 'MemAvailable: 9490056 kB' 'Buffers: 2436 kB' 'Cached: 1814760 kB' 'SwapCached: 0 kB' 'Active: 462396 kB' 'Inactive: 1474740 kB' 'Active(anon): 130412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121516 kB' 'Mapped: 48560 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135608 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72524 kB' 'KernelStack: 6352 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.480 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:54.481 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
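get_meminfo has just echoed 1024 for HugePages_Total, and hugepages.sh verifies that this equals the requested page count plus surplus plus reserved pages ((( 1024 == nr_hugepages + surp + resv ))). get_nodes then enumerates the NUMA nodes with the extglob pattern /sys/devices/system/node/node+([0-9]); this VM has a single node, so nodes_sys[0]=1024 and no_nodes=1, after which the per-node surplus is read from node0's own meminfo file. A hedged sketch of that enumeration (values copied from the trace):

    shopt -s extglob                              # +([0-9]) is an extglob pattern
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} keeps only the numeric id, e.g. .../node0 -> 0
      nodes_sys[${node##*node}]=1024              # 1024 pages seen on this node in the trace
    done
    no_nodes=${#nodes_sys[@]}                     # 1 on this single-node VM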
00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7888728 kB' 'MemUsed: 4353252 kB' 'SwapCached: 0 kB' 'Active: 462604 kB' 'Inactive: 1474740 kB' 'Active(anon): 130620 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 1817196 kB' 'Mapped: 48560 kB' 'AnonPages: 121768 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 135608 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 
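For the per-node pass, get_meminfo points mem_f at /sys/devices/system/node/node0/meminfo, slurps it with mapfile, and strips the leading 'Node 0 ' prefix from every line via the extglob expansion "${mem[@]#Node +([0-9]) }" before running the same key scan, this time for HugePages_Surp. The node0 dump above already shows HugePages_Total and HugePages_Free at 1024 and HugePages_Surp at 0. A hedged, self-contained sketch of the per-node variant (node_meminfo_get is an invented name for illustration):

    shopt -s extglob
    node_meminfo_get() {
      local node=$1 get=$2 line var val _
      while read -r line; do
        line=${line#Node +([0-9]) }               # drop the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
    }
    # node_meminfo_get 0 HugePages_Surp -> 0, matching the dump above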
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.482 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:54.483 node0=1024 expecting 1024 00:03:54.483 ************************************ 00:03:54.483 END TEST default_setup 00:03:54.483 ************************************ 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.483 00:03:54.483 real 0m1.376s 00:03:54.483 user 0m0.635s 00:03:54.483 sys 0m0.699s 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.483 12:59:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:54.483 12:59:46 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:54.483 12:59:46 
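That closes TEST default_setup: node0 reports no surplus pages, so the per-node tally stays at 1024 and matches the expectation printed as 'node0=1024 expecting 1024'; the final [[ 1024 == \1\0\2\4 ]] check passes and the test finishes in roughly 1.4 s of wall time. run_test then launches the next case, per_node_1G_alloc, traced below. The bookkeeping reduces to a few lines; a hedged sketch with the values visible in this log (resv is not printed here and is assumed to be 0):

    nr_hugepages=1024 surp=0 resv=0          # surp comes from the "echo 0" above; resv assumed 0
    (( 1024 == nr_hugepages + surp + resv )) || echo "unexpected system-wide HugePages_Total"
    nodes_test=([0]=1024)                    # node0: free pages plus reserved plus surplus
    for node in "${!nodes_test[@]}"; do
      echo "node${node}=${nodes_test[node]} expecting ${nr_hugepages}"
    done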
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.483 12:59:46 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.483 12:59:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.483 ************************************ 00:03:54.483 START TEST per_node_1G_alloc 00:03:54.483 ************************************ 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.483 12:59:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.052 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.052 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.052 0000:00:12.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:03:55.052 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.052 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:55.052 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:55.052 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.052 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.052 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.052 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8938676 kB' 'MemAvailable: 10540008 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 463036 kB' 'Inactive: 1474744 kB' 'Active(anon): 131052 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122172 kB' 'Mapped: 48652 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135536 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72452 kB' 'KernelStack: 6328 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 
163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.053 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 
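per_node_1G_alloc requested 1048576 kB (1 GiB) of hugepages on node 0; with the 2048 kB Hugepagesize shown in the dump above that is 512 pages, which is what get_test_nr_hugepages recorded for node 0 and what scripts/setup.sh was driven with (NRHUGE=512 HUGENODE=0). Before counting pages, verify_nr_hugepages also confirms that transparent hugepages are not set to [never] (the 'always [madvise] never' test above) and reads AnonHugePages, captured as anon in the trace that follows. The sizing arithmetic, as a hedged sketch:

    size_kb=1048576                          # requested per-node size (1 GiB)
    hugepagesize_kb=2048                     # Hugepagesize reported in the dump above
    echo $(( size_kb / hugepagesize_kb ))    # -> 512, the NRHUGE value handed to scripts/setup.sh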
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8938792 kB' 'MemAvailable: 10540124 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462764 kB' 'Inactive: 1474744 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121888 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135552 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72468 kB' 'KernelStack: 6368 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.054 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.054 
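AnonHugePages came back as 0 (anon=0 above), and verify_nr_hugepages moves on to the system-wide HugePages_Surp. The fresh /proc/meminfo dump above already confirms that the 1 GiB request took effect: HugePages_Total and HugePages_Free are both 512, HugePages_Rsvd and HugePages_Surp are 0, and Hugetlb is 1048576 kB, i.e. 512 pages of 2048 kB each. A hedged one-line cross-check of that arithmetic:

    (( 512 * 2048 == 1048576 )) && echo "Hugetlb matches HugePages_Total x Hugepagesize"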
12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the read loop checks every remaining /proc/meminfo field, Buffers through HugePages_Rsvd, against HugePages_Surp and continues past each non-matching field]
00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
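The lookup traced above is the pattern this test repeats for every counter: get_meminfo reads /proc/meminfo (or the per-NUMA-node meminfo file when a node number is supplied), splits each line on ': ', and prints the value as soon as the requested field matches, which is why HugePages_Surp resolves to 0 here. A simplified re-implementation under those assumptions (a sketch, not SPDK's setup/common.sh):

  # get_meminfo FIELD [NODE] - print FIELD's numeric value from /proc/meminfo,
  # or from /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
  get_meminfo() {
      local get=$1 node=${2:-} src var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          # Per-node files prefix every line with "Node <n> "; strip that prefix
          # so the field names line up with the system-wide /proc/meminfo layout.
          src=$(sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo")
      else
          src=$(</proc/meminfo)
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done <<< "$src"
      return 1
  }
  get_meminfo HugePages_Surp      # -> 0 on the snapshot above
  get_meminfo HugePages_Total 0   # per-node variant, if node0 exists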
setup/hugepages.sh@99 -- # surp=0 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8938792 kB' 'MemAvailable: 10540124 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462748 kB' 'Inactive: 1474744 kB' 'Active(anon): 130764 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121852 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135552 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72468 kB' 'KernelStack: 6368 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.056 12:59:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.056 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the read loop checks every remaining /proc/meminfo field, Buffers through HugePages_Free, against HugePages_Rsvd and continues past each non-matching field]
00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:55.317 nr_hugepages=512 resv_hugepages=0 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
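At this point the helper calls have produced anon=0, surp=0 and resv=0; the script echoes nr_hugepages=512 together with the three zero counters just below and then checks (( 512 == nr_hugepages + surp + resv )) at setup/hugepages.sh@107. The snapshot it parsed is self-consistent: HugePages_Total is 512 pages of Hugepagesize 2048 kB, exactly the Hugetlb figure of 1048576 kB (1 GiB, matching the 1G-per-node allocation the test name suggests), and HugePages_Free is still 512, so none of the pool is in use yet. A small stand-alone check in the same spirit (illustrative, not the SPDK script; the expected count of 512 is taken from this run):

  # Re-derive and sanity-check the hugepage counters printed by this test.
  meminfo() { awk -v k="$1" '$1 == (k ":") { print $2 }' /proc/meminfo; }

  expected=512                       # pages requested for this run
  total=$(meminfo HugePages_Total)   # 512 in the snapshot above
  surp=$(meminfo HugePages_Surp)     # 0
  resv=$(meminfo HugePages_Rsvd)     # 0
  size_kb=$(meminfo Hugepagesize)    # 2048
  hugetlb_kb=$(meminfo Hugetlb)      # 1048576 = 512 * 2048

  (( expected == total + surp + resv )) || echo "unexpected hugepage count" >&2
  # Hugetlb covers pages of every size; the equality holds here because only
  # 2048 kB pages are reserved on this machine.
  (( hugetlb_kb == total * size_kb )) || echo "Hugetlb != HugePages_Total * Hugepagesize" >&2
  echo "nr_hugepages=$total surplus_hugepages=$surp resv_hugepages=$resv"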
resv_hugepages=0 00:03:55.317 surplus_hugepages=0 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.317 anon_hugepages=0 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8938792 kB' 'MemAvailable: 10540124 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462764 kB' 'Inactive: 1474744 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121880 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135552 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72468 kB' 'KernelStack: 6352 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.317 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.317 12:59:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the read loop checks MemFree through Slab against HugePages_Total and continues past each non-matching field]
00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- #
continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.318 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.319 12:59:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8938792 kB' 'MemUsed: 3303188 kB' 'SwapCached: 0 kB' 'Active: 462652 kB' 'Inactive: 1474744 kB' 'Active(anon): 130668 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1817200 kB' 'Mapped: 48564 kB' 'AnonPages: 121776 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 135552 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
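
(Annotation, not part of the captured log: the long trace above is setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node0/meminfo field by field until it reaches HugePages_Surp. Below is a minimal stand-alone sketch of that parsing pattern, assuming the standard /proc/meminfo and per-node sysfs meminfo layout; get_meminfo_sketch is an illustrative name, not the SPDK function itself.)

#!/usr/bin/env bash
shopt -s extglob                        # needed for the "Node <N> " prefix strip below
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo line var val _
    # Prefer the per-node view when a node is given and the sysfs file exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }     # per-node files prefix every line with "Node 0 "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done <"$file"
    return 1
}
# usage: get_meminfo_sketch HugePages_Surp 0
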
00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.319 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 
12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.320 node0=512 expecting 512 00:03:55.320 ************************************ 00:03:55.320 END TEST per_node_1G_alloc 00:03:55.320 ************************************ 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.320 00:03:55.320 real 0m0.720s 00:03:55.320 user 0m0.322s 00:03:55.320 sys 0m0.413s 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.320 12:59:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.320 12:59:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:55.320 12:59:47 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.320 12:59:47 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.320 12:59:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.320 ************************************ 00:03:55.320 START TEST even_2G_alloc 00:03:55.320 ************************************ 00:03:55.320 12:59:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:55.320 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:55.320 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.321 
12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.321 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.844 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.844 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.844 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.844 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.845 12:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7885588 kB' 'MemAvailable: 9486920 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 463244 kB' 'Inactive: 1474744 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 122400 kB' 'Mapped: 48520 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135556 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72472 kB' 'KernelStack: 6440 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.845 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
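
(Annotation, not part of the captured log: in this verify_nr_hugepages pass the script first checked /sys/kernel/mm/transparent_hugepage/enabled - the "always [madvise] never != *[never]*" test a few entries back - and, since THP is not disabled, it is now reading AnonHugePages out of /proc/meminfo. A hedged sketch of that check follows, assuming a standard Linux THP sysfs layout; thp_anon_sketch is an illustrative name, not a hugepages.sh function.)

thp_anon_sketch() {
    local mode anon=0
    # Current THP mode, e.g. "always [madvise] never"; the bracketed word is the active one.
    mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $mode != *'[never]'* ]]; then
        # THP can back anonymous memory, so count the kernel's AnonHugePages counter (kB).
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "$anon"
}
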
00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7885588 kB' 'MemAvailable: 9486920 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462492 kB' 'Inactive: 1474744 kB' 'Active(anon): 130508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121868 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135572 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72488 kB' 'KernelStack: 6368 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.846 12:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.846 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.847 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.848 
12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.848 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7885588 kB' 'MemAvailable: 9486920 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462388 kB' 'Inactive: 1474744 kB' 'Active(anon): 130404 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121764 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135572 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72488 kB' 'KernelStack: 6336 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.849 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.850 nr_hugepages=1024 00:03:55.850 resv_hugepages=0 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.850 surplus_hugepages=0 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.850 anon_hugepages=0 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:03:55.850 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7885600 kB' 'MemAvailable: 9486932 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462280 kB' 'Inactive: 1474744 kB' 'Active(anon): 130296 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121396 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135568 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72484 kB' 'KernelStack: 6320 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.851 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 
12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.852 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.112 12:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7885544 kB' 'MemUsed: 4356436 kB' 'SwapCached: 0 kB' 'Active: 462756 kB' 'Inactive: 1474744 kB' 'Active(anon): 130772 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 1817200 kB' 'Mapped: 48564 kB' 'AnonPages: 121924 kB' 'Shmem: 10472 kB' 'KernelStack: 6368 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 135568 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.112 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.113 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.114 node0=1024 expecting 1024 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.114 00:03:56.114 real 0m0.684s 00:03:56.114 user 0m0.347s 00:03:56.114 sys 0m0.370s 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.114 12:59:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.114 ************************************ 00:03:56.114 END TEST even_2G_alloc 00:03:56.114 ************************************ 00:03:56.114 12:59:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:56.114 12:59:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.114 12:59:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.114 12:59:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
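The even_2G_alloc trace above reduces to one helper plus a per-node check: get_meminfo scans /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), strips the "Node N" prefix, and reads colon/space-separated key-value pairs with IFS=': ' until it reaches the requested field, and the test then expects each node's HugePages_Total to equal the even allocation of 1024 pages. The following is a condensed sketch reconstructed from the xtrace, not the verbatim setup/common.sh or setup/hugepages.sh source; the 1024-page expectation mirrors this particular run.

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the "Node N " prefix strip below

    # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from the
    # per-node meminfo file when NODE is given (sketch based on the trace above,
    # not the actual setup/common.sh implementation).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local -a mem
        local var val _
        mapfile -t mem < "$mem_f"
        # per-node meminfo files prefix every line with "Node N "; drop that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Per-node check corresponding to "node0=1024 expecting 1024" in the log:
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting 1024"
    done

On this single-node VM the loop prints one line, and odd_alloc below repeats the same pattern with 1025 pages (HUGEMEM=2049, HUGE_EVEN_ALLOC=yes) to exercise an odd page count.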
00:03:56.114 ************************************ 00:03:56.114 START TEST odd_alloc 00:03:56.114 ************************************ 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.114 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.640 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.640 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.640 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.640 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7879924 kB' 'MemAvailable: 9481256 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462948 kB' 'Inactive: 1474744 kB' 'Active(anon): 130964 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122392 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135536 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72452 kB' 'KernelStack: 6392 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.640 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 
12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.641 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7879672 kB' 'MemAvailable: 9481004 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462672 kB' 'Inactive: 1474744 kB' 'Active(anon): 130688 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121828 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135552 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72468 kB' 'KernelStack: 6368 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.642 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 
12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.643 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7879672 kB' 'MemAvailable: 9481004 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462472 kB' 'Inactive: 1474744 kB' 'Active(anon): 130488 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121636 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135556 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72472 kB' 'KernelStack: 6368 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.644 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.645 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.646 nr_hugepages=1025 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:56.646 resv_hugepages=0 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.646 surplus_hugepages=0 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.646 anon_hugepages=0 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7879672 kB' 'MemAvailable: 9481004 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462472 kB' 'Inactive: 1474744 kB' 'Active(anon): 130488 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121636 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135556 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72472 kB' 'KernelStack: 6368 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 
12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.646 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.647 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7879672 kB' 'MemUsed: 4362308 kB' 'SwapCached: 0 kB' 'Active: 462720 kB' 'Inactive: 1474744 kB' 'Active(anon): 130736 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1817200 kB' 'Mapped: 48564 kB' 'AnonPages: 121656 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 135552 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.649 node0=1025 expecting 1025 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:56.649 00:03:56.649 real 0m0.667s 00:03:56.649 user 0m0.317s 00:03:56.649 sys 0m0.397s 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.649 12:59:48 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:03:56.649 ************************************ 00:03:56.649 END TEST odd_alloc 00:03:56.649 ************************************ 00:03:56.649 12:59:48 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:56.649 12:59:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.649 12:59:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.649 12:59:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.940 ************************************ 00:03:56.940 START TEST custom_alloc 00:03:56.940 ************************************ 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.940 12:59:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.200 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.200 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.200 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.200 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.200 12:59:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8937384 kB' 'MemAvailable: 10538716 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 463156 kB' 'Inactive: 1474744 kB' 'Active(anon): 131172 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122288 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135576 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72492 kB' 'KernelStack: 6384 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
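By this point the custom_alloc test has translated its 1048576 kB (1 GiB) request into 512 default-sized (2048 kB) hugepages, assigned them all to node 0, and handed HUGENODE='nodes_hp[0]=512' to scripts/setup.sh; the meminfo snapshot above already reports HugePages_Total: 512. A rough sketch of that size-to-count arithmetic, with illustrative variable names (the real logic is get_test_nr_hugepages and get_test_nr_hugepages_per_node in setup/hugepages.sh):

    # Illustrative arithmetic only.
    hugepagesize_kb=2048                  # Hugepagesize from /proc/meminfo
    requested_kb=1048576                  # 1 GiB asked for by custom_alloc
    nr_hugepages=$(( requested_kb / hugepagesize_kb ))   # => 512

    declare -a nodes_hp=()
    nodes_hp[0]=$nr_hugepages             # single-node VM: everything on node 0

    # Build the HUGENODE spec handed to scripts/setup.sh.
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    printf '%s\n' "${HUGENODE[@]}"        # nodes_hp[0]=512
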
00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8937132 kB' 'MemAvailable: 10538464 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462512 kB' 'Inactive: 1474744 kB' 'Active(anon): 130528 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121668 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135572 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72488 kB' 'KernelStack: 6368 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
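The HugePages_Surp and HugePages_Rsvd lookups here feed the same consistency check the odd_alloc test ran earlier (setup/hugepages.sh@110 in this trace): HugePages_Total must equal the requested page count plus surplus and reserved pages, and each node's share is then echoed as "nodeN=X expecting X". A condensed, self-contained sketch of that check (hypothetical helper name; awk stands in for the read loop the trace actually uses, and the real test additionally walks per-node counts and the sorted_t/sorted_s bookkeeping):

    # Hypothetical condensed form of the verify_nr_hugepages accounting.
    verify_hugepages_sketch() {
        local expected=$1
        local total surp resv
        total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
        surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
        resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
        # Global invariant: every configured page is either free-for-use,
        # surplus, or reserved.
        (( total == expected + surp + resv )) || return 1
        echo "node0=$expected expecting $expected"
    }

    verify_hugepages_sketch 512   # custom_alloc expects 512 pages on node 0
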
00:03:57.466 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.467 12:59:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8937132 kB' 'MemAvailable: 10538464 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462768 kB' 'Inactive: 1474744 kB' 'Active(anon): 130784 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121916 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135572 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72488 kB' 'KernelStack: 6368 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.467 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.467 12:59:49 
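The snapshot above shows the hugepage pool under test: 'HugePages_Total: 512' at 'Hugepagesize: 2048 kB' is 512 x 2048 kB = 1048576 kB, matching the 'Hugetlb: 1048576 kB' field, and 'HugePages_Free: 512', 'HugePages_Rsvd: 0', 'HugePages_Surp: 0' are the values the following lookups read back one by one.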
[xtrace loop, setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue repeats for every field of the snapshot (MemTotal through HugePages_Free) without a match]
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:57.469 nr_hugepages=512
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:57.469 resv_hugepages=0
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.469 surplus_hugepages=0
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:57.469 anon_hugepages=0
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
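Both the HugePages_Surp and HugePages_Rsvd lookups above run through the same get_meminfo helper in setup/common.sh. As a minimal sketch reconstructed only from this xtrace (the real helper differs in detail, e.g. it uses mapfile and extglob stripping of the 'Node N ' prefix), the logic is roughly:

  # Sketch only -- reconstructed from the trace above, not copied from SPDK.
  # Usage: get_meminfo <field> [numa-node], e.g. get_meminfo HugePages_Rsvd
  #        or get_meminfo HugePages_Surp 0 for a per-node value.
  get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node argument the per-node statistics come from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
      line=${line#"Node $node "}             # per-node lines carry a "Node N " prefix
      IFS=': ' read -r var val _ <<<"$line"  # split "HugePages_Rsvd:   0" into var/val
      if [[ $var == "$get" ]]; then
        echo "$val"                          # value only, e.g. 0 or 512
        return 0
      fi
    done <"$mem_f"
    return 1
  }

Against the snapshot above this would print 0 for HugePages_Rsvd and 512 for HugePages_Total, which is what the surp=0 / resv=0 assignments and the (( 512 == nr_hugepages + surp + resv )) check depend on.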
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8937132 kB' 'MemAvailable: 10538464 kB' 'Buffers: 2436 kB' 'Cached: 1814764 kB' 'SwapCached: 0 kB' 'Active: 462772 kB' 'Inactive: 1474744 kB' 'Active(anon): 130788 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121932 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 63084 kB' 'Slab: 135572 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72488 kB' 'KernelStack: 6368 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:57.469 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace loop, setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue repeats for every field of the snapshot (MemTotal through Unaccepted) without a match]
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
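The get_nodes / nodes_test bookkeeping above distributes the expected 512 pages across the NUMA nodes found under /sys/devices/system/node (a single node0 on this VM) and then re-reads each node's counters through the same helper, now pointed at /sys/devices/system/node/node0/meminfo. A rough sketch of that per-node pass, reusing the get_meminfo sketch above (variable names are illustrative, not SPDK's exact ones):

  # Sketch only: per-node pass mirroring hugepages.sh@115-117 in this trace.
  nodes_test=([0]=512)   # expected pages per node; one node on this VM
  resv=0
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))              # fold reserved pages into the expectation
    surp=$(get_meminfo HugePages_Surp "$node")  # read from /sys/devices/system/node/node$node/meminfo
    echo "node$node: expecting ${nodes_test[$node]} pages, surplus=$surp"
  done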
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.471 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8937132 kB' 'MemUsed: 3304848 kB' 'SwapCached: 0 kB' 'Active: 462736 kB' 'Inactive: 1474744 kB' 'Active(anon): 130752 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1817200 kB' 'Mapped: 48564 kB' 'AnonPages: 121856 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 135572 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 72488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace loop, setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue repeats for each node0 field (MemTotal through FilePmdMapped) without a match]
00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.472 12:59:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.472 node0=512 expecting 512 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:57.472 00:03:57.472 real 0m0.685s 00:03:57.472 user 0m0.324s 00:03:57.472 sys 0m0.407s 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.472 12:59:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.472 ************************************ 00:03:57.472 END TEST custom_alloc 00:03:57.472 ************************************ 00:03:57.472 12:59:49 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:57.472 12:59:49 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.472 12:59:49 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.472 12:59:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.472 ************************************ 00:03:57.472 START TEST no_shrink_alloc 00:03:57.472 ************************************ 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.472 12:59:49 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:57.472 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.473 12:59:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.992 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.992 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.992 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.992 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.992 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7887524 kB' 'MemAvailable: 9488856 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459732 kB' 'Inactive: 1474748 kB' 'Active(anon): 127748 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118816 kB' 'Mapped: 47956 kB' 'Shmem: 10472 kB' 'KReclaimable: 63076 kB' 'Slab: 135388 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72312 kB' 'KernelStack: 6248 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.992 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.992 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.993 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7887524 kB' 'MemAvailable: 9488856 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459628 kB' 'Inactive: 1474748 kB' 'Active(anon): 127644 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118796 kB' 'Mapped: 47824 kB' 'Shmem: 10472 kB' 'KReclaimable: 63076 kB' 'Slab: 135384 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 6288 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.994 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 
12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.995 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7887524 kB' 'MemAvailable: 9488856 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459796 kB' 'Inactive: 1474748 kB' 'Active(anon): 127812 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118936 kB' 'Mapped: 47824 kB' 'Shmem: 10472 kB' 'KReclaimable: 63076 kB' 'Slab: 135384 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 6272 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 
12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.258 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.259 nr_hugepages=1024 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.259 resv_hugepages=0 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.259 surplus_hugepages=0 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.259 anon_hugepages=0 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7887524 kB' 'MemAvailable: 9488856 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459356 kB' 'Inactive: 1474748 kB' 'Active(anon): 127372 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118540 kB' 'Mapped: 47824 kB' 'Shmem: 10472 kB' 'KReclaimable: 63076 kB' 'Slab: 135384 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 6288 kB' 'PageTables: 3708 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.259 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 
12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 
12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.260 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
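The xtrace records around this point are setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one requested (HugePages_Total here), at which point it echoes the value and returns 0. The following is a condensed sketch of that parsing loop, reconstructed from the trace rather than copied from the script, so the real helper may differ in detail:

    shopt -s extglob                                  # needed for the +([0-9]) pattern below
    get_meminfo() {                                   # sketch: get_meminfo FIELD [NODE]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # a node argument switches to that node's own meminfo file when it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")              # strip the "Node N " prefix of per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"    # split "Field:   value kB"
            [[ $var == "$get" ]] || continue          # not the field we want -> keep scanning
            echo "$val"                               # e.g. 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Total it prints 1024 per the log above; get_meminfo HugePages_Surp 0 reads node0's file instead, which is exactly the pair of lookups traced in this part of the log.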
00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7887524 kB' 'MemUsed: 4354456 kB' 'SwapCached: 0 kB' 'Active: 459352 kB' 'Inactive: 1474748 kB' 'Active(anon): 127368 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1817204 kB' 'Mapped: 47824 kB' 'AnonPages: 118796 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63076 kB' 'Slab: 135384 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.261 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
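The records around this point are the same field-by-field scan, now for HugePages_Surp against /sys/devices/system/node/node0/meminfo, feeding setup/hugepages.sh's per-node accounting: get_nodes fills nodes_sys from the node directories, the reserved and surplus counts are folded into nodes_test, and the script then prints 'node0=1024 expecting 1024'. A rough outline of that accounting, inferred from the traced commands (it reuses the get_meminfo sketch above and is not the verbatim script):

    shopt -s extglob nullglob                          # +([0-9]) globbing for the node directories
    declare -a nodes_sys nodes_test
    nr_hugepages=1024 surp=0 resv=0                    # the values echoed earlier in the log

    # record the hugepage count the kernel reports for every NUMA node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        nodes_test[${node##*node}]=$nr_hugepages       # expected count per node (one node in this run)
    done

    # global sanity check: total hugepages == requested + surplus + reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # fold reserved and per-node surplus pages into the per-node expectation
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # 0 in this run
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"  # -> node0=1024 expecting 1024
    done

With 1024 pages on node0 and no surplus or reserved pages, both sides come out to 1024, which is why the trace further down records the passing '[[ 1024 == 1024 ]]' comparison.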
00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 
12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.262 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.262 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.263 node0=1024 expecting 1024 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.263 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.784 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.784 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.784 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.784 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.784 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:58.784 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
*\[\n\e\v\e\r\]* ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7884584 kB' 'MemAvailable: 9485916 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459880 kB' 'Inactive: 1474748 kB' 'Active(anon): 127896 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118996 kB' 'Mapped: 47900 kB' 'Shmem: 10472 kB' 'KReclaimable: 63076 kB' 'Slab: 135356 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72280 kB' 'KernelStack: 6280 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.785 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7884584 kB' 'MemAvailable: 9485916 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459388 kB' 'Inactive: 
1474748 kB' 'Active(anon): 127404 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118548 kB' 'Mapped: 47824 kB' 'Shmem: 10472 kB' 'KReclaimable: 63076 kB' 'Slab: 135360 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72284 kB' 'KernelStack: 6288 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.786 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.787 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
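The long runs of "continue" above are the xtrace of setup/common.sh's get_meminfo walking every field of /proc/meminfo until it reaches the key it was asked for (HugePages_Surp in this pass) and echoing its value, or 0 if nothing matches. A minimal sketch of that lookup pattern, reconstructed from the trace rather than copied verbatim from the script, with the function name chosen here purely for illustration:

#!/usr/bin/env bash
# Sketch of the lookup traced above: split each meminfo line on ': ' into
# key and value, skip everything that is not the requested key (the long
# run of "continue" entries in the log), and print the value that matches.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # When a node is given, read the node-local meminfo instead; the
    # "Node N " prefix is stripped, as the traced
    # mem=("${mem[@]#Node +([0-9]) }") expansion does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}

Called as "get_meminfo_sketch HugePages_Surp", it prints 0 on this VM, which is why the hugepages.sh caller a few entries further down records surp=0.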
00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.788 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7884584 kB' 'MemAvailable: 9485916 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459656 kB' 'Inactive: 1474748 kB' 'Active(anon): 127672 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118816 kB' 'Mapped: 47824 kB' 'Shmem: 10472 
kB' 'KReclaimable: 63076 kB' 'Slab: 135360 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72284 kB' 'KernelStack: 6288 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
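The surrounding hugepages.sh calls collect anon, surp and resv the same way, and the earlier "node0=1024 expecting 1024" line is the result of comparing the per-node hugepage count against the requested allocation. A simplified sketch of that comparison follows; the real setup/hugepages.sh also folds reserved and anonymous hugepages into its per-node totals, and the sysfs paths and variable names below are illustrative assumptions, not the script's own code.

#!/usr/bin/env bash
# Simplified sketch of the check behind "node0=1024 expecting 1024": for each
# NUMA node, read HugePages_Total and HugePages_Surp from the node-local
# meminfo, subtract surplus pages, and compare against the expected count.
expected=${1:-1024}
status=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue
    node=${node_dir##*node}
    total=0 surp=0
    while IFS=': ' read -r var val _; do
        case $var in
            HugePages_Total) total=$val ;;
            HugePages_Surp)  surp=$val ;;
        esac
    done < <(sed 's/^Node [0-9]* //' "$node_dir/meminfo")
    echo "node$node=$((total - surp)) expecting $expected"
    (( total - surp == expected )) || status=1
done
exit $status

With Hugepagesize reported as 2048 kB, the 1024 pages counted here line up with the Hugetlb figure of 2097152 kB in the meminfo dumps above (1024 x 2048 kB).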
00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.789 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.790 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.791 nr_hugepages=1024 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.791 resv_hugepages=0 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.791 surplus_hugepages=0 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.791 anon_hugepages=0 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7884584 kB' 'MemAvailable: 9485916 kB' 'Buffers: 2436 kB' 'Cached: 1814768 kB' 'SwapCached: 0 kB' 'Active: 459388 kB' 'Inactive: 1474748 kB' 'Active(anon): 127404 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118548 kB' 'Mapped: 47824 kB' 'Shmem: 10472 kB' 'KReclaimable: 63076 kB' 'Slab: 135360 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72284 kB' 'KernelStack: 6288 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
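The snapshot printed above is internally consistent: HugePages_Total of 1024 pages at a Hugepagesize of 2048 kB accounts for the reported Hugetlb figure, since 1024 * 2048 kB = 2097152 kB. A quick way to re-check that relation on any box (field names exactly as they appear in /proc/meminfo):

    awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {h=$2}
         END {r = (t*s == h) ? "consistent" : "mismatch"; print r, t*s, h}' /proc/meminfo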
00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.791 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
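All of this scanning feeds one accounting check in the harness: the expected page count must equal the configured pages plus surplus plus reserved ((( 1024 == nr_hugepages + surp + resv )) in the trace, which holds here because HugePages_Surp and HugePages_Rsvd are both 0). A standalone sketch of the same assertion, with variable names chosen for illustration:

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    # Mirrors the relation asserted above; with Surp=0 and Rsvd=0 it reduces to total == nr_hugepages.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK: $total pages"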
00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7884584 kB' 'MemUsed: 4357396 kB' 'SwapCached: 0 kB' 'Active: 459396 kB' 'Inactive: 1474748 kB' 'Active(anon): 127412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1474748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1817204 kB' 'Mapped: 47824 kB' 'AnonPages: 118812 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63076 kB' 'Slab: 135360 kB' 'SReclaimable: 63076 kB' 'SUnreclaim: 72284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.793 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.794 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.794 node0=1024 expecting 1024 00:03:58.795 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.795 12:59:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.795 00:03:58.795 real 0m1.383s 00:03:58.795 user 0m0.663s 00:03:58.795 sys 0m0.813s 00:03:58.795 12:59:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.795 12:59:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.795 ************************************ 00:03:58.795 END TEST no_shrink_alloc 00:03:58.795 ************************************ 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.054 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.054 00:03:59.054 real 0m5.986s 00:03:59.054 user 0m2.755s 00:03:59.054 sys 0m3.372s 00:03:59.054 12:59:50 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.054 12:59:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.054 
************************************ 00:03:59.054 END TEST hugepages 00:03:59.054 ************************************ 00:03:59.054 12:59:51 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:59.054 12:59:51 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.054 12:59:51 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.054 12:59:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.054 ************************************ 00:03:59.054 START TEST driver 00:03:59.054 ************************************ 00:03:59.054 12:59:51 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:59.054 * Looking for test storage... 00:03:59.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:59.054 12:59:51 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:59.054 12:59:51 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.054 12:59:51 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.615 12:59:57 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:05.615 12:59:57 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.615 12:59:57 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.615 12:59:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:05.615 ************************************ 00:04:05.615 START TEST guess_driver 00:04:05.615 ************************************ 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:05.615 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:05.615 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:05.615 Looking for driver=uio_pci_generic 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:05.616 12:59:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.183 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.183 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.184 12:59:58 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.744 00:04:12.745 real 0m7.123s 00:04:12.745 user 0m0.789s 00:04:12.745 sys 0m1.411s 00:04:12.745 13:00:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.745 13:00:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 ************************************ 00:04:12.745 END TEST guess_driver 00:04:12.745 ************************************ 00:04:12.745 00:04:12.745 real 0m13.147s 
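Condensed, the guess_driver pass traced above amounts to: use vfio when IOMMU groups are populated (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve it. On this VM the trace shows zero IOMMU groups, which is why uio_pci_generic ends up selected. A rough stand-alone sketch, not the actual driver.sh; the function name and structure are illustrative only.

#!/usr/bin/env bash
pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=""
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        # --show-depends lists the insmod lines without loading anything,
        # which is how the trace checks that the module is available.
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

pick_driver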
00:04:12.745 user 0m1.119s 00:04:12.745 sys 0m2.208s 00:04:12.745 13:00:04 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.745 ************************************ 00:04:12.745 END TEST driver 00:04:12.745 ************************************ 00:04:12.745 13:00:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 13:00:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:12.745 13:00:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.745 13:00:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.745 13:00:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 ************************************ 00:04:12.745 START TEST devices 00:04:12.745 ************************************ 00:04:12.745 13:00:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:12.745 * Looking for test storage... 00:04:12.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:12.745 13:00:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:12.745 13:00:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:12.745 13:00:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.745 13:00:04 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.312 13:00:05 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:13.312 13:00:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:13.312 13:00:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:13.312 13:00:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:13.312 No valid GPT data, bailing 00:04:13.312 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.312 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.312 13:00:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes 
nvme0n1 00:04:13.312 13:00:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:13.312 13:00:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:13.312 13:00:05 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:13.312 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:13.312 13:00:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:13.312 13:00:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:13.571 No valid GPT data, bailing 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:13.571 No valid GPT data, bailing 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:13.571 
13:00:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:13.571 No valid GPT data, bailing 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:13.571 13:00:05 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:13.571 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:13.571 13:00:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:13.840 No valid GPT data, bailing 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:13.841 13:00:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:13.841 13:00:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:13.841 13:00:05 
setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:13.841 No valid GPT data, bailing 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.841 13:00:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:13.841 13:00:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:13.841 13:00:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:13.841 13:00:05 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:13.841 13:00:05 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:13.842 13:00:05 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:13.842 13:00:05 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.842 13:00:05 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.842 13:00:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.842 ************************************ 00:04:13.842 START TEST nvme_mount 00:04:13.842 ************************************ 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 
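Before any mounting, the devices test walks /sys/block/nvme*, skips zoned namespaces and controller-level nodes, rejects disks that already carry a partition table, and keeps only those of at least min_disk_size (3221225472 bytes); that is why nvme3n1 at 1 GiB is enumerated above but never becomes the test disk. The filter below is an approximation: blkid stands in for scripts/spdk-gpt.py, and everything else follows the standard sysfs layout.

#!/usr/bin/env bash
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the log

for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $dev == *c* ]] && continue            # skip controller paths like nvme3c3n1
    # skip zoned namespaces, as is_block_zoned does
    [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
    # skip disks that already carry a partition table ("block in use")
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    # /sys/block/<dev>/size is in 512-byte sectors
    size=$(( $(cat "$block/size") * 512 ))
    (( size >= min_disk_size )) && echo "candidate: $dev ($size bytes)"
done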
00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.842 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.843 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.843 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.843 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.843 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:13.843 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.843 13:00:05 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:14.788 Creating new GPT entries in memory. 00:04:14.788 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.788 other utilities. 00:04:14.788 13:00:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.788 13:00:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.788 13:00:06 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.788 13:00:06 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.788 13:00:06 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:16.163 Creating new GPT entries in memory. 00:04:16.163 The operation has completed successfully. 
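The partition_drive step just traced boils down to wiping the disk label and creating one small partition, with sync_dev_uevents.sh making the script wait until the kernel announces the new node. A bare-bones equivalent is below; the sgdisk arguments are the ones visible in the log, while the polling loop is only a crude stand-in for the uevent helper.

#!/usr/bin/env bash
disk=/dev/nvme0n1

sgdisk "$disk" --zap-all                          # destroy any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # 262144 sectors, ~128 MiB

# crude stand-in for scripts/sync_dev_uevents.sh: wait for the partition node
until [[ -b ${disk}p1 ]]; do
    sleep 0.1
done
echo "partition ${disk}p1 is ready"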
00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59482 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:16.163 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.164 13:00:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:16.164 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.164 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:16.164 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:16.164 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.164 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.164 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.164 13:00:08 
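Stripped of the helper indirection, the mkfs/mount portion of nvme_mount traced above does the following; paths are copied from the log, and this is a sketch of the flow rather than the test itself.

#!/usr/bin/env bash
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
dev=/dev/nvme0n1p1

mkdir -p "$mnt"
mkfs.ext4 -qF "$dev"          # quiet, force, as in the trace
mount "$dev" "$mnt"
touch "$mnt/test_nvme"        # the dummy file that verify() later looks for

# teardown, as in the cleanup_nvme step further down
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "$dev"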
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.164 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.421 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.421 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.421 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.421 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.679 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.679 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:16.938 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.938 13:00:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.196 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:17.196 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:17.196 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:17.196 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.196 13:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.453 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.453 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:17.453 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.453 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.453 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.453 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.453 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.454 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.713 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.713 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.713 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.713 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.973 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.973 13:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.973 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.973 13:00:10 setup.sh.devices.nvme_mount -- 
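The repeated verify() passes in this trace all follow one pattern: run setup.sh config with PCI_ALLOWED pinned to the controller under test, then scan its per-device status lines for the expected "Active devices: ..." entry showing the mount kept setup.sh from binding the device. The loop below is reconstructed from the xtrace rather than copied from devices.sh; the PCI address and mount string are the ones in the log.

#!/usr/bin/env bash
setup=/home/vagrant/spdk_repo/spdk/scripts/setup.sh
target=0000:00:11.0
found=0

# config output lines look like:
#   0000:00:11.0 (<vendor> <device>): Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev
while read -r pci _ _ status; do
    [[ $pci == "$target" ]] || continue
    [[ $status == *"Active devices:"*"nvme0n1:nvme0n1p1"* ]] && found=1
done < <(PCI_ALLOWED=$target "$setup" config)

(( found == 1 )) && echo "mounted device was skipped by setup.sh, as expected"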
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:17.973 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.973 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.973 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:17.973 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.232 13:00:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.500 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.773 13:00:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.773 13:00:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.032 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.032 00:04:19.032 real 0m5.304s 00:04:19.032 user 0m1.427s 00:04:19.032 sys 0m1.559s 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.032 13:00:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:19.032 ************************************ 00:04:19.032 END TEST nvme_mount 00:04:19.032 ************************************ 00:04:19.032 13:00:11 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:19.032 13:00:11 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.032 13:00:11 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.032 13:00:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.290 ************************************ 00:04:19.290 START TEST dm_mount 00:04:19.290 ************************************ 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.290 13:00:11 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:20.223 Creating new GPT entries in memory. 00:04:20.223 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.223 other utilities. 00:04:20.223 13:00:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.223 13:00:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.223 13:00:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.223 13:00:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.223 13:00:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:21.157 Creating new GPT entries in memory. 00:04:21.157 The operation has completed successfully. 00:04:21.157 13:00:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:21.157 13:00:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.157 13:00:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:21.157 13:00:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.157 13:00:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:22.531 The operation has completed successfully. 
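For dm_mount the disk now carries two partitions (sectors 2048-264191 and 264192-526335), which the test goes on to combine into a single device-mapper node named nvme_dm_test. The exact dmsetup table is not visible in the log, so the linear concatenation below is only a plausible reconstruction; the partition layout and target name are the parts taken from the trace.

#!/usr/bin/env bash
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")   # sizes in 512-byte sectors, as dm tables expect
s2=$(blockdev --getsz "$p2")

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

# the test then checks that both partitions list dm-0 as a holder
ls /sys/class/block/nvme0n1p1/holders /sys/class/block/nvme0n1p2/holders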
00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60110 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.531 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.789 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.789 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.789 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.789 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.789 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.789 13:00:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.047 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.047 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.305 13:00:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.563 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.563 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:23.563 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:23.563 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.563 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.563 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.821 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.821 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.821 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.821 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.821 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.822 13:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.079 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.079 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:04:24.337 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:24.337 00:04:24.337 real 0m5.188s 00:04:24.337 user 0m0.996s 00:04:24.337 sys 0m1.109s 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.337 13:00:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.337 ************************************ 00:04:24.337 END TEST dm_mount 00:04:24.337 ************************************ 00:04:24.337 13:00:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:24.337 13:00:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:24.337 13:00:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.337 13:00:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.337 13:00:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.337 13:00:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.337 13:00:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.596 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.596 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.596 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.596 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.596 13:00:16 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:24.596 13:00:16 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.596 13:00:16 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.596 13:00:16 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.596 13:00:16 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.596 13:00:16 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.596 13:00:16 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:24.596 00:04:24.596 real 0m12.519s 00:04:24.596 user 0m3.359s 00:04:24.596 sys 0m3.436s 00:04:24.596 13:00:16 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.596 13:00:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.596 ************************************ 00:04:24.596 END TEST devices 00:04:24.596 ************************************ 00:04:24.853 00:04:24.853 real 0m43.821s 00:04:24.853 user 0m10.313s 00:04:24.853 sys 0m13.090s 00:04:24.853 13:00:16 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.853 ************************************ 00:04:24.853 END TEST setup.sh 00:04:24.853 13:00:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.853 ************************************ 00:04:24.853 13:00:16 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.677 Hugepages 00:04:25.677 node hugesize free / total 00:04:25.677 node0 1048576kB 0 / 0 00:04:25.677 node0 2048kB 2048 / 2048 00:04:25.677 
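The hugepage summary just above can be reproduced straight from sysfs. A minimal sketch of the idea (not the setup.sh implementation itself; paths assume the standard Linux sysfs layout):

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*kB; do
            size=${hp##*hugepages-}                 # e.g. "2048kB"
            free=$(cat "$hp/free_hugepages")        # pages currently unused on this node
            total=$(cat "$hp/nr_hugepages")         # pages reserved on this node
            echo "$(basename "$node") $size $free / $total"
        done
    done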
00:04:25.677 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.935 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.935 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:25.935 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:25.935 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:26.213 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:26.213 13:00:18 -- spdk/autotest.sh@130 -- # uname -s 00:04:26.214 13:00:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:26.214 13:00:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:26.214 13:00:18 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.353 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.353 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.353 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.353 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.353 13:00:19 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:28.289 13:00:20 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:28.289 13:00:20 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:28.289 13:00:20 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.289 13:00:20 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:28.289 13:00:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:28.289 13:00:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:28.289 13:00:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.289 13:00:20 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.289 13:00:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:28.547 13:00:20 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:04:28.547 13:00:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:28.547 13:00:20 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.091 Waiting for block devices as requested 00:04:29.091 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.091 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.350 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.350 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.618 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:34.618 13:00:26 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.618 13:00:26 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:34.618 13:00:26 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:34.618 13:00:26 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:34.618 13:00:26 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:34.618 13:00:26 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:34.618 13:00:26 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.618 13:00:26 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.618 13:00:26 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1557 -- # continue 00:04:34.618 13:00:26 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.618 13:00:26 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:34.618 13:00:26 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.618 13:00:26 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.618 13:00:26 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1557 -- # continue 00:04:34.618 13:00:26 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.618 13:00:26 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:04:34.618 13:00:26 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:34.618 13:00:26 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.618 13:00:26 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.618 13:00:26 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.618 13:00:26 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.619 13:00:26 -- common/autotest_common.sh@1557 -- # continue 00:04:34.619 13:00:26 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.619 13:00:26 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:34.619 13:00:26 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:04:34.619 13:00:26 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.619 13:00:26 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:34.619 13:00:26 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:34.619 13:00:26 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:34.619 13:00:26 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:04:34.619 13:00:26 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:04:34.619 13:00:26 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:04:34.619 13:00:26 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:04:34.619 13:00:26 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.619 13:00:26 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.619 13:00:26 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:34.619 13:00:26 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.619 13:00:26 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.619 13:00:26 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:04:34.619 13:00:26 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.619 13:00:26 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.619 13:00:26 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.619 13:00:26 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.619 13:00:26 -- common/autotest_common.sh@1557 -- # continue 00:04:34.619 13:00:26 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:04:34.619 13:00:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.619 13:00:26 -- common/autotest_common.sh@10 -- # set +x 00:04:34.619 13:00:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:34.619 13:00:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.619 13:00:26 -- common/autotest_common.sh@10 -- # set +x 00:04:34.619 13:00:26 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.763 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.763 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.763 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.763 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.763 13:00:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:35.763 13:00:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.763 13:00:27 -- common/autotest_common.sh@10 -- # set +x 00:04:36.021 13:00:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:36.021 13:00:27 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:36.021 13:00:27 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:36.021 13:00:27 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:36.021 13:00:27 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:36.021 13:00:27 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:36.021 13:00:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:36.021 13:00:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:36.022 13:00:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.022 13:00:27 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.022 13:00:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:36.022 13:00:28 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:04:36.022 13:00:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:36.022 13:00:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:36.022 13:00:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:36.022 13:00:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:36.022 13:00:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:36.022 13:00:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:36.022 13:00:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:36.022 13:00:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:36.022 13:00:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:36.022 
13:00:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:36.022 13:00:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:36.022 13:00:28 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:36.022 13:00:28 -- common/autotest_common.sh@1593 -- # return 0 00:04:36.022 13:00:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:36.022 13:00:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:36.022 13:00:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.022 13:00:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.022 13:00:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:36.022 13:00:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.022 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.022 13:00:28 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:36.022 13:00:28 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:36.022 13:00:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.022 13:00:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.022 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.022 ************************************ 00:04:36.022 START TEST env 00:04:36.022 ************************************ 00:04:36.022 13:00:28 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:36.022 * Looking for test storage... 00:04:36.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:36.022 13:00:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:36.022 13:00:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.022 13:00:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.022 13:00:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.022 ************************************ 00:04:36.022 START TEST env_memory 00:04:36.022 ************************************ 00:04:36.022 13:00:28 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:36.022 00:04:36.022 00:04:36.022 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.022 http://cunit.sourceforge.net/ 00:04:36.022 00:04:36.022 00:04:36.022 Suite: memory 00:04:36.279 Test: alloc and free memory map ...[2024-07-25 13:00:28.254714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.279 passed 00:04:36.279 Test: mem map translation ...[2024-07-25 13:00:28.315291] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.279 [2024-07-25 13:00:28.315371] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.279 [2024-07-25 13:00:28.315467] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.279 [2024-07-25 13:00:28.315500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.279 passed 00:04:36.279 Test: mem map registration ...[2024-07-25 13:00:28.405660] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:04:36.279 [2024-07-25 13:00:28.405749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:36.279 passed 00:04:36.538 Test: mem map adjacent registrations ...passed 00:04:36.538 00:04:36.538 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.538 suites 1 1 n/a 0 0 00:04:36.538 tests 4 4 4 0 0 00:04:36.538 asserts 152 152 152 0 n/a 00:04:36.538 00:04:36.538 Elapsed time = 0.291 seconds 00:04:36.538 00:04:36.538 real 0m0.331s 00:04:36.538 user 0m0.301s 00:04:36.538 sys 0m0.024s 00:04:36.538 13:00:28 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.538 ************************************ 00:04:36.538 END TEST env_memory 00:04:36.538 13:00:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:36.538 ************************************ 00:04:36.538 13:00:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.538 13:00:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.538 13:00:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.538 13:00:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.538 ************************************ 00:04:36.538 START TEST env_vtophys 00:04:36.538 ************************************ 00:04:36.538 13:00:28 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.538 EAL: lib.eal log level changed from notice to debug 00:04:36.538 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 1 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 2 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 3 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 4 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 5 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 6 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 7 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 8 as core 0 on socket 0 00:04:36.538 EAL: Detected lcore 9 as core 0 on socket 0 00:04:36.538 EAL: Maximum logical cores by configuration: 128 00:04:36.538 EAL: Detected CPU lcores: 10 00:04:36.538 EAL: Detected NUMA nodes: 1 00:04:36.538 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.538 EAL: Detected shared linkage of DPDK 00:04:36.538 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.538 EAL: Selected IOVA mode 'PA' 00:04:36.538 EAL: Probing VFIO support... 00:04:36.538 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.538 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:36.538 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.538 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.538 EAL: Setting up physically contiguous memory... 
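The EAL lines above show the probe order on this VM: /sys/module/vfio is absent, so VFIO support is skipped and IOVA mode 'PA' is selected. A quick hedged pre-flight check of which userspace I/O module is actually loaded (not part of the test itself; module names are the standard kernel ones):

    if [ -e /sys/module/vfio_pci ]; then
        echo "vfio-pci loaded: EAL can use IOMMU-backed VFIO"
    elif [ -e /sys/module/uio_pci_generic ]; then
        echo "uio_pci_generic only: EAL falls back to IOVA mode PA"
    else
        echo "no userspace I/O driver loaded; run scripts/setup.sh first"
    fi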
00:04:36.538 EAL: Setting maximum number of open files to 524288 00:04:36.538 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.538 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.538 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.538 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.538 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.538 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.538 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.538 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.538 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.538 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.538 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.538 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.538 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.538 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.538 EAL: Hugepages will be freed exactly as allocated. 00:04:36.538 EAL: No shared files mode enabled, IPC is disabled 00:04:36.538 EAL: No shared files mode enabled, IPC is disabled 00:04:36.796 EAL: TSC frequency is ~2200000 KHz 00:04:36.796 EAL: Main lcore 0 is ready (tid=7f52e4ebba40;cpuset=[0]) 00:04:36.796 EAL: Trying to obtain current memory policy. 00:04:36.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.796 EAL: Restoring previous memory policy: 0 00:04:36.796 EAL: request: mp_malloc_sync 00:04:36.796 EAL: No shared files mode enabled, IPC is disabled 00:04:36.796 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.796 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.796 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:36.796 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.796 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:36.796 00:04:36.797 00:04:36.797 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.797 http://cunit.sourceforge.net/ 00:04:36.797 00:04:36.797 00:04:36.797 Suite: components_suite 00:04:37.054 Test: vtophys_malloc_test ...passed 00:04:37.054 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:37.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.054 EAL: Restoring previous memory policy: 4 00:04:37.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.054 EAL: request: mp_malloc_sync 00:04:37.054 EAL: No shared files mode enabled, IPC is disabled 00:04:37.054 EAL: Heap on socket 0 was expanded by 4MB 00:04:37.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.054 EAL: request: mp_malloc_sync 00:04:37.054 EAL: No shared files mode enabled, IPC is disabled 00:04:37.054 EAL: Heap on socket 0 was shrunk by 4MB 00:04:37.055 EAL: Trying to obtain current memory policy. 00:04:37.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.055 EAL: Restoring previous memory policy: 4 00:04:37.055 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.055 EAL: request: mp_malloc_sync 00:04:37.055 EAL: No shared files mode enabled, IPC is disabled 00:04:37.055 EAL: Heap on socket 0 was expanded by 6MB 00:04:37.055 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.055 EAL: request: mp_malloc_sync 00:04:37.055 EAL: No shared files mode enabled, IPC is disabled 00:04:37.055 EAL: Heap on socket 0 was shrunk by 6MB 00:04:37.055 EAL: Trying to obtain current memory policy. 00:04:37.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.055 EAL: Restoring previous memory policy: 4 00:04:37.055 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.055 EAL: request: mp_malloc_sync 00:04:37.055 EAL: No shared files mode enabled, IPC is disabled 00:04:37.055 EAL: Heap on socket 0 was expanded by 10MB 00:04:37.055 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.055 EAL: request: mp_malloc_sync 00:04:37.055 EAL: No shared files mode enabled, IPC is disabled 00:04:37.055 EAL: Heap on socket 0 was shrunk by 10MB 00:04:37.055 EAL: Trying to obtain current memory policy. 00:04:37.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.055 EAL: Restoring previous memory policy: 4 00:04:37.055 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.055 EAL: request: mp_malloc_sync 00:04:37.055 EAL: No shared files mode enabled, IPC is disabled 00:04:37.055 EAL: Heap on socket 0 was expanded by 18MB 00:04:37.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.322 EAL: request: mp_malloc_sync 00:04:37.322 EAL: No shared files mode enabled, IPC is disabled 00:04:37.322 EAL: Heap on socket 0 was shrunk by 18MB 00:04:37.322 EAL: Trying to obtain current memory policy. 00:04:37.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.322 EAL: Restoring previous memory policy: 4 00:04:37.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.322 EAL: request: mp_malloc_sync 00:04:37.322 EAL: No shared files mode enabled, IPC is disabled 00:04:37.322 EAL: Heap on socket 0 was expanded by 34MB 00:04:37.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.322 EAL: request: mp_malloc_sync 00:04:37.322 EAL: No shared files mode enabled, IPC is disabled 00:04:37.322 EAL: Heap on socket 0 was shrunk by 34MB 00:04:37.322 EAL: Trying to obtain current memory policy. 
00:04:37.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.322 EAL: Restoring previous memory policy: 4 00:04:37.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.322 EAL: request: mp_malloc_sync 00:04:37.322 EAL: No shared files mode enabled, IPC is disabled 00:04:37.322 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.322 EAL: request: mp_malloc_sync 00:04:37.322 EAL: No shared files mode enabled, IPC is disabled 00:04:37.322 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.595 EAL: Trying to obtain current memory policy. 00:04:37.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.595 EAL: Restoring previous memory policy: 4 00:04:37.595 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.595 EAL: request: mp_malloc_sync 00:04:37.595 EAL: No shared files mode enabled, IPC is disabled 00:04:37.595 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.595 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.595 EAL: request: mp_malloc_sync 00:04:37.595 EAL: No shared files mode enabled, IPC is disabled 00:04:37.595 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.854 EAL: Trying to obtain current memory policy. 00:04:37.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.854 EAL: Restoring previous memory policy: 4 00:04:37.854 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.854 EAL: request: mp_malloc_sync 00:04:37.854 EAL: No shared files mode enabled, IPC is disabled 00:04:37.854 EAL: Heap on socket 0 was expanded by 258MB 00:04:38.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.112 EAL: request: mp_malloc_sync 00:04:38.112 EAL: No shared files mode enabled, IPC is disabled 00:04:38.112 EAL: Heap on socket 0 was shrunk by 258MB 00:04:38.685 EAL: Trying to obtain current memory policy. 00:04:38.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.685 EAL: Restoring previous memory policy: 4 00:04:38.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.685 EAL: request: mp_malloc_sync 00:04:38.685 EAL: No shared files mode enabled, IPC is disabled 00:04:38.685 EAL: Heap on socket 0 was expanded by 514MB 00:04:39.251 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.251 EAL: request: mp_malloc_sync 00:04:39.251 EAL: No shared files mode enabled, IPC is disabled 00:04:39.251 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.818 EAL: Trying to obtain current memory policy. 
00:04:39.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.077 EAL: Restoring previous memory policy: 4 00:04:40.077 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.077 EAL: request: mp_malloc_sync 00:04:40.077 EAL: No shared files mode enabled, IPC is disabled 00:04:40.077 EAL: Heap on socket 0 was expanded by 1026MB 00:04:41.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.461 EAL: request: mp_malloc_sync 00:04:41.461 EAL: No shared files mode enabled, IPC is disabled 00:04:41.461 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.842 passed 00:04:42.842 00:04:42.842 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.842 suites 1 1 n/a 0 0 00:04:42.842 tests 2 2 2 0 0 00:04:42.842 asserts 5362 5362 5362 0 n/a 00:04:42.842 00:04:42.842 Elapsed time = 5.844 seconds 00:04:42.842 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.842 EAL: request: mp_malloc_sync 00:04:42.842 EAL: No shared files mode enabled, IPC is disabled 00:04:42.842 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.842 EAL: No shared files mode enabled, IPC is disabled 00:04:42.842 EAL: No shared files mode enabled, IPC is disabled 00:04:42.842 EAL: No shared files mode enabled, IPC is disabled 00:04:42.842 00:04:42.842 real 0m6.145s 00:04:42.842 user 0m5.331s 00:04:42.842 sys 0m0.662s 00:04:42.842 ************************************ 00:04:42.842 END TEST env_vtophys 00:04:42.842 ************************************ 00:04:42.842 13:00:34 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.842 13:00:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:42.842 13:00:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:42.842 13:00:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.842 13:00:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.842 13:00:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.842 ************************************ 00:04:42.842 START TEST env_pci 00:04:42.842 ************************************ 00:04:42.842 13:00:34 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:42.842 00:04:42.842 00:04:42.842 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.842 http://cunit.sourceforge.net/ 00:04:42.842 00:04:42.842 00:04:42.842 Suite: pci 00:04:42.842 Test: pci_hook ...[2024-07-25 13:00:34.783335] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61934 has claimed it 00:04:42.842 passed 00:04:42.842 00:04:42.842 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.842 suites 1 1 n/a 0 0 00:04:42.842 tests 1 1 1 0 0 00:04:42.842 asserts 25 25 25 0 n/a 00:04:42.842 00:04:42.842 Elapsed time = 0.007 seconds 00:04:42.842 EAL: Cannot find device (10000:00:01.0) 00:04:42.842 EAL: Failed to attach device on primary process 00:04:42.842 ************************************ 00:04:42.842 END TEST env_pci 00:04:42.842 ************************************ 00:04:42.842 00:04:42.842 real 0m0.076s 00:04:42.842 user 0m0.034s 00:04:42.842 sys 0m0.041s 00:04:42.842 13:00:34 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.842 13:00:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:42.842 13:00:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.842 13:00:34 env -- env/env.sh@15 -- # uname 00:04:42.842 13:00:34 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.842 13:00:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.842 13:00:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.842 13:00:34 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:42.842 13:00:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.842 13:00:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.842 ************************************ 00:04:42.842 START TEST env_dpdk_post_init 00:04:42.842 ************************************ 00:04:42.842 13:00:34 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.842 EAL: Detected CPU lcores: 10 00:04:42.842 EAL: Detected NUMA nodes: 1 00:04:42.842 EAL: Detected shared linkage of DPDK 00:04:42.842 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.842 EAL: Selected IOVA mode 'PA' 00:04:43.101 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:43.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:43.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:43.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:43.101 Starting DPDK initialization... 00:04:43.101 Starting SPDK post initialization... 00:04:43.101 SPDK NVMe probe 00:04:43.101 Attaching to 0000:00:10.0 00:04:43.101 Attaching to 0000:00:11.0 00:04:43.101 Attaching to 0000:00:12.0 00:04:43.101 Attaching to 0000:00:13.0 00:04:43.101 Attached to 0000:00:10.0 00:04:43.101 Attached to 0000:00:11.0 00:04:43.101 Attached to 0000:00:13.0 00:04:43.101 Attached to 0000:00:12.0 00:04:43.101 Cleaning up... 
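The spdk_nvme driver can attach to the four controllers here because scripts/setup.sh previously rebound them from the kernel nvme driver to uio_pci_generic (the 'nvme -> uio_pci_generic' lines earlier in this log). Done by hand for a single device, that rebinding is roughly the following sketch, using the generic sysfs driver_override mechanism rather than anything SPDK-specific:

    bdf=0000:00:10.0                                         # one of the controllers probed above
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"  # detach from the kernel nvme driver
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                 # PCI core now binds the override driver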
00:04:43.101 00:04:43.101 real 0m0.283s 00:04:43.101 user 0m0.111s 00:04:43.101 sys 0m0.075s 00:04:43.101 13:00:35 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.101 ************************************ 00:04:43.101 END TEST env_dpdk_post_init 00:04:43.101 ************************************ 00:04:43.101 13:00:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.101 13:00:35 env -- env/env.sh@26 -- # uname 00:04:43.101 13:00:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:43.101 13:00:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.101 13:00:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.101 13:00:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.101 13:00:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.101 ************************************ 00:04:43.101 START TEST env_mem_callbacks 00:04:43.101 ************************************ 00:04:43.101 13:00:35 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.101 EAL: Detected CPU lcores: 10 00:04:43.101 EAL: Detected NUMA nodes: 1 00:04:43.101 EAL: Detected shared linkage of DPDK 00:04:43.359 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.359 EAL: Selected IOVA mode 'PA' 00:04:43.359 00:04:43.359 00:04:43.359 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.359 http://cunit.sourceforge.net/ 00:04:43.359 00:04:43.359 00:04:43.359 Suite: memory 00:04:43.359 Test: test ... 00:04:43.359 register 0x200000200000 2097152 00:04:43.359 malloc 3145728 00:04:43.359 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.359 register 0x200000400000 4194304 00:04:43.359 buf 0x2000004fffc0 len 3145728 PASSED 00:04:43.359 malloc 64 00:04:43.359 buf 0x2000004ffec0 len 64 PASSED 00:04:43.359 malloc 4194304 00:04:43.359 register 0x200000800000 6291456 00:04:43.359 buf 0x2000009fffc0 len 4194304 PASSED 00:04:43.359 free 0x2000004fffc0 3145728 00:04:43.359 free 0x2000004ffec0 64 00:04:43.359 unregister 0x200000400000 4194304 PASSED 00:04:43.359 free 0x2000009fffc0 4194304 00:04:43.359 unregister 0x200000800000 6291456 PASSED 00:04:43.359 malloc 8388608 00:04:43.359 register 0x200000400000 10485760 00:04:43.359 buf 0x2000005fffc0 len 8388608 PASSED 00:04:43.359 free 0x2000005fffc0 8388608 00:04:43.359 unregister 0x200000400000 10485760 PASSED 00:04:43.359 passed 00:04:43.359 00:04:43.359 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.359 suites 1 1 n/a 0 0 00:04:43.359 tests 1 1 1 0 0 00:04:43.359 asserts 15 15 15 0 n/a 00:04:43.359 00:04:43.359 Elapsed time = 0.073 seconds 00:04:43.359 ************************************ 00:04:43.359 END TEST env_mem_callbacks 00:04:43.359 ************************************ 00:04:43.359 00:04:43.359 real 0m0.280s 00:04:43.359 user 0m0.108s 00:04:43.359 sys 0m0.068s 00:04:43.359 13:00:35 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.359 13:00:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:43.359 00:04:43.359 real 0m7.455s 00:04:43.359 user 0m6.001s 00:04:43.359 sys 0m1.071s 00:04:43.359 13:00:35 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.359 ************************************ 00:04:43.360 13:00:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.360 END TEST env 00:04:43.360 
************************************ 00:04:43.619 13:00:35 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:43.619 13:00:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.619 13:00:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.619 13:00:35 -- common/autotest_common.sh@10 -- # set +x 00:04:43.619 ************************************ 00:04:43.619 START TEST rpc 00:04:43.619 ************************************ 00:04:43.619 13:00:35 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:43.619 * Looking for test storage... 00:04:43.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.619 13:00:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62048 00:04:43.619 13:00:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.619 13:00:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:43.619 13:00:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62048 00:04:43.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.619 13:00:35 rpc -- common/autotest_common.sh@831 -- # '[' -z 62048 ']' 00:04:43.619 13:00:35 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.619 13:00:35 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.619 13:00:35 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.619 13:00:35 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.619 13:00:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.878 [2024-07-25 13:00:35.812842] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:43.878 [2024-07-25 13:00:35.813237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62048 ] 00:04:43.878 [2024-07-25 13:00:35.988060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.136 [2024-07-25 13:00:36.214495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.136 [2024-07-25 13:00:36.214572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62048' to capture a snapshot of events at runtime. 00:04:44.136 [2024-07-25 13:00:36.214598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.136 [2024-07-25 13:00:36.214614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.136 [2024-07-25 13:00:36.214630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62048 for offline analysis/debug. 
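waitforlisten above blocks until the freshly started spdk_tgt (pid 62048) answers on /var/tmp/spdk.sock. The same readiness check can be approximated from a shell with rpc.py; a sketch, not the autotest_common.sh implementation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                                    # target not listening yet
    done
    "$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs | jq length   # 0 on a fresh target, as checked below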
00:04:44.136 [2024-07-25 13:00:36.214675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.068 13:00:36 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.068 13:00:36 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:45.068 13:00:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.068 13:00:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.068 13:00:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:45.068 13:00:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:45.068 13:00:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.068 13:00:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.068 13:00:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.068 ************************************ 00:04:45.068 START TEST rpc_integrity 00:04:45.068 ************************************ 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:45.068 13:00:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.068 13:00:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.068 13:00:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.068 13:00:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.068 13:00:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.068 13:00:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:45.068 13:00:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.068 13:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.068 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.068 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.068 { 00:04:45.068 "name": "Malloc0", 00:04:45.068 "aliases": [ 00:04:45.068 "08259ffb-7a12-46e4-b46a-4b0ddec77d5d" 00:04:45.068 ], 00:04:45.068 "product_name": "Malloc disk", 00:04:45.068 "block_size": 512, 00:04:45.068 "num_blocks": 16384, 00:04:45.068 "uuid": "08259ffb-7a12-46e4-b46a-4b0ddec77d5d", 00:04:45.068 "assigned_rate_limits": { 00:04:45.068 "rw_ios_per_sec": 0, 00:04:45.068 "rw_mbytes_per_sec": 0, 00:04:45.068 "r_mbytes_per_sec": 0, 00:04:45.068 "w_mbytes_per_sec": 0 00:04:45.068 }, 00:04:45.068 "claimed": false, 00:04:45.068 "zoned": false, 00:04:45.068 "supported_io_types": { 00:04:45.068 "read": true, 00:04:45.068 "write": true, 00:04:45.068 "unmap": true, 00:04:45.068 "flush": true, 
00:04:45.068 "reset": true, 00:04:45.068 "nvme_admin": false, 00:04:45.068 "nvme_io": false, 00:04:45.068 "nvme_io_md": false, 00:04:45.068 "write_zeroes": true, 00:04:45.068 "zcopy": true, 00:04:45.068 "get_zone_info": false, 00:04:45.068 "zone_management": false, 00:04:45.068 "zone_append": false, 00:04:45.068 "compare": false, 00:04:45.068 "compare_and_write": false, 00:04:45.068 "abort": true, 00:04:45.068 "seek_hole": false, 00:04:45.068 "seek_data": false, 00:04:45.068 "copy": true, 00:04:45.068 "nvme_iov_md": false 00:04:45.068 }, 00:04:45.068 "memory_domains": [ 00:04:45.068 { 00:04:45.068 "dma_device_id": "system", 00:04:45.068 "dma_device_type": 1 00:04:45.068 }, 00:04:45.068 { 00:04:45.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.068 "dma_device_type": 2 00:04:45.068 } 00:04:45.068 ], 00:04:45.068 "driver_specific": {} 00:04:45.068 } 00:04:45.068 ]' 00:04:45.068 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.068 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.068 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.068 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.068 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.068 [2024-07-25 13:00:37.079343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.068 [2024-07-25 13:00:37.079447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.068 [2024-07-25 13:00:37.079532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:45.068 [2024-07-25 13:00:37.079546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.068 [2024-07-25 13:00:37.082254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.068 [2024-07-25 13:00:37.082297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.068 Passthru0 00:04:45.068 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.068 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.068 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.068 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.068 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.068 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.068 { 00:04:45.069 "name": "Malloc0", 00:04:45.069 "aliases": [ 00:04:45.069 "08259ffb-7a12-46e4-b46a-4b0ddec77d5d" 00:04:45.069 ], 00:04:45.069 "product_name": "Malloc disk", 00:04:45.069 "block_size": 512, 00:04:45.069 "num_blocks": 16384, 00:04:45.069 "uuid": "08259ffb-7a12-46e4-b46a-4b0ddec77d5d", 00:04:45.069 "assigned_rate_limits": { 00:04:45.069 "rw_ios_per_sec": 0, 00:04:45.069 "rw_mbytes_per_sec": 0, 00:04:45.069 "r_mbytes_per_sec": 0, 00:04:45.069 "w_mbytes_per_sec": 0 00:04:45.069 }, 00:04:45.069 "claimed": true, 00:04:45.069 "claim_type": "exclusive_write", 00:04:45.069 "zoned": false, 00:04:45.069 "supported_io_types": { 00:04:45.069 "read": true, 00:04:45.069 "write": true, 00:04:45.069 "unmap": true, 00:04:45.069 "flush": true, 00:04:45.069 "reset": true, 00:04:45.069 "nvme_admin": false, 00:04:45.069 "nvme_io": false, 00:04:45.069 "nvme_io_md": false, 00:04:45.069 "write_zeroes": true, 00:04:45.069 "zcopy": true, 
00:04:45.069 "get_zone_info": false, 00:04:45.069 "zone_management": false, 00:04:45.069 "zone_append": false, 00:04:45.069 "compare": false, 00:04:45.069 "compare_and_write": false, 00:04:45.069 "abort": true, 00:04:45.069 "seek_hole": false, 00:04:45.069 "seek_data": false, 00:04:45.069 "copy": true, 00:04:45.069 "nvme_iov_md": false 00:04:45.069 }, 00:04:45.069 "memory_domains": [ 00:04:45.069 { 00:04:45.069 "dma_device_id": "system", 00:04:45.069 "dma_device_type": 1 00:04:45.069 }, 00:04:45.069 { 00:04:45.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.069 "dma_device_type": 2 00:04:45.069 } 00:04:45.069 ], 00:04:45.069 "driver_specific": {} 00:04:45.069 }, 00:04:45.069 { 00:04:45.069 "name": "Passthru0", 00:04:45.069 "aliases": [ 00:04:45.069 "bd6822d5-e5b2-588f-a7cc-e12848a9861e" 00:04:45.069 ], 00:04:45.069 "product_name": "passthru", 00:04:45.069 "block_size": 512, 00:04:45.069 "num_blocks": 16384, 00:04:45.069 "uuid": "bd6822d5-e5b2-588f-a7cc-e12848a9861e", 00:04:45.069 "assigned_rate_limits": { 00:04:45.069 "rw_ios_per_sec": 0, 00:04:45.069 "rw_mbytes_per_sec": 0, 00:04:45.069 "r_mbytes_per_sec": 0, 00:04:45.069 "w_mbytes_per_sec": 0 00:04:45.069 }, 00:04:45.069 "claimed": false, 00:04:45.069 "zoned": false, 00:04:45.069 "supported_io_types": { 00:04:45.069 "read": true, 00:04:45.069 "write": true, 00:04:45.069 "unmap": true, 00:04:45.069 "flush": true, 00:04:45.069 "reset": true, 00:04:45.069 "nvme_admin": false, 00:04:45.069 "nvme_io": false, 00:04:45.069 "nvme_io_md": false, 00:04:45.069 "write_zeroes": true, 00:04:45.069 "zcopy": true, 00:04:45.069 "get_zone_info": false, 00:04:45.069 "zone_management": false, 00:04:45.069 "zone_append": false, 00:04:45.069 "compare": false, 00:04:45.069 "compare_and_write": false, 00:04:45.069 "abort": true, 00:04:45.069 "seek_hole": false, 00:04:45.069 "seek_data": false, 00:04:45.069 "copy": true, 00:04:45.069 "nvme_iov_md": false 00:04:45.069 }, 00:04:45.069 "memory_domains": [ 00:04:45.069 { 00:04:45.069 "dma_device_id": "system", 00:04:45.069 "dma_device_type": 1 00:04:45.069 }, 00:04:45.069 { 00:04:45.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.069 "dma_device_type": 2 00:04:45.069 } 00:04:45.069 ], 00:04:45.069 "driver_specific": { 00:04:45.069 "passthru": { 00:04:45.069 "name": "Passthru0", 00:04:45.069 "base_bdev_name": "Malloc0" 00:04:45.069 } 00:04:45.069 } 00:04:45.069 } 00:04:45.069 ]' 00:04:45.069 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.069 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.069 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.069 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.069 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:45.069 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.069 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.069 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.327 ************************************ 00:04:45.327 END TEST rpc_integrity 00:04:45.328 ************************************ 00:04:45.328 13:00:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.328 00:04:45.328 real 0m0.361s 00:04:45.328 user 0m0.222s 00:04:45.328 sys 0m0.042s 00:04:45.328 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.328 13:00:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.328 13:00:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.328 13:00:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.328 13:00:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.328 13:00:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.328 ************************************ 00:04:45.328 START TEST rpc_plugins 00:04:45.328 ************************************ 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.328 { 00:04:45.328 "name": "Malloc1", 00:04:45.328 "aliases": [ 00:04:45.328 "1c7aa535-f9a6-41d7-a23d-80bfb5155563" 00:04:45.328 ], 00:04:45.328 "product_name": "Malloc disk", 00:04:45.328 "block_size": 4096, 00:04:45.328 "num_blocks": 256, 00:04:45.328 "uuid": "1c7aa535-f9a6-41d7-a23d-80bfb5155563", 00:04:45.328 "assigned_rate_limits": { 00:04:45.328 "rw_ios_per_sec": 0, 00:04:45.328 "rw_mbytes_per_sec": 0, 00:04:45.328 "r_mbytes_per_sec": 0, 00:04:45.328 "w_mbytes_per_sec": 0 00:04:45.328 }, 00:04:45.328 "claimed": false, 00:04:45.328 "zoned": false, 00:04:45.328 "supported_io_types": { 00:04:45.328 "read": true, 00:04:45.328 "write": true, 00:04:45.328 "unmap": true, 00:04:45.328 "flush": true, 00:04:45.328 "reset": true, 00:04:45.328 "nvme_admin": false, 00:04:45.328 "nvme_io": false, 00:04:45.328 "nvme_io_md": false, 00:04:45.328 "write_zeroes": true, 00:04:45.328 "zcopy": true, 00:04:45.328 "get_zone_info": false, 00:04:45.328 "zone_management": false, 00:04:45.328 "zone_append": false, 00:04:45.328 "compare": false, 00:04:45.328 "compare_and_write": false, 00:04:45.328 "abort": true, 00:04:45.328 "seek_hole": false, 00:04:45.328 "seek_data": false, 00:04:45.328 "copy": true, 00:04:45.328 "nvme_iov_md": false 00:04:45.328 }, 00:04:45.328 "memory_domains": [ 00:04:45.328 { 00:04:45.328 "dma_device_id": "system", 00:04:45.328 "dma_device_type": 1 00:04:45.328 }, 00:04:45.328 { 00:04:45.328 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:04:45.328 "dma_device_type": 2 00:04:45.328 } 00:04:45.328 ], 00:04:45.328 "driver_specific": {} 00:04:45.328 } 00:04:45.328 ]' 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.328 ************************************ 00:04:45.328 END TEST rpc_plugins 00:04:45.328 ************************************ 00:04:45.328 13:00:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.328 00:04:45.328 real 0m0.180s 00:04:45.328 user 0m0.119s 00:04:45.328 sys 0m0.020s 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.328 13:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.586 13:00:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.586 13:00:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.586 13:00:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.586 13:00:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.586 ************************************ 00:04:45.586 START TEST rpc_trace_cmd_test 00:04:45.586 ************************************ 00:04:45.586 13:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:45.586 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.586 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.586 13:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.586 13:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.586 13:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.586 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.586 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62048", 00:04:45.587 "tpoint_group_mask": "0x8", 00:04:45.587 "iscsi_conn": { 00:04:45.587 "mask": "0x2", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "scsi": { 00:04:45.587 "mask": "0x4", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "bdev": { 00:04:45.587 "mask": "0x8", 00:04:45.587 "tpoint_mask": "0xffffffffffffffff" 00:04:45.587 }, 00:04:45.587 "nvmf_rdma": { 00:04:45.587 "mask": "0x10", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "nvmf_tcp": { 00:04:45.587 "mask": "0x20", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "ftl": { 00:04:45.587 "mask": "0x40", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "blobfs": { 00:04:45.587 "mask": "0x80", 00:04:45.587 
"tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "dsa": { 00:04:45.587 "mask": "0x200", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "thread": { 00:04:45.587 "mask": "0x400", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "nvme_pcie": { 00:04:45.587 "mask": "0x800", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "iaa": { 00:04:45.587 "mask": "0x1000", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "nvme_tcp": { 00:04:45.587 "mask": "0x2000", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "bdev_nvme": { 00:04:45.587 "mask": "0x4000", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 }, 00:04:45.587 "sock": { 00:04:45.587 "mask": "0x8000", 00:04:45.587 "tpoint_mask": "0x0" 00:04:45.587 } 00:04:45.587 }' 00:04:45.587 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.587 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:45.587 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.587 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.587 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.587 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.587 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.845 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.845 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.845 ************************************ 00:04:45.846 END TEST rpc_trace_cmd_test 00:04:45.846 ************************************ 00:04:45.846 13:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.846 00:04:45.846 real 0m0.278s 00:04:45.846 user 0m0.239s 00:04:45.846 sys 0m0.029s 00:04:45.846 13:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.846 13:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.846 13:00:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:45.846 13:00:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.846 13:00:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.846 13:00:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.846 13:00:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.846 13:00:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.846 ************************************ 00:04:45.846 START TEST rpc_daemon_integrity 00:04:45.846 ************************************ 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.846 13:00:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.846 { 00:04:45.846 "name": "Malloc2", 00:04:45.846 "aliases": [ 00:04:45.846 "3f9a9e57-3f90-43fa-aa98-dbfcfa8c6543" 00:04:45.846 ], 00:04:45.846 "product_name": "Malloc disk", 00:04:45.846 "block_size": 512, 00:04:45.846 "num_blocks": 16384, 00:04:45.846 "uuid": "3f9a9e57-3f90-43fa-aa98-dbfcfa8c6543", 00:04:45.846 "assigned_rate_limits": { 00:04:45.846 "rw_ios_per_sec": 0, 00:04:45.846 "rw_mbytes_per_sec": 0, 00:04:45.846 "r_mbytes_per_sec": 0, 00:04:45.846 "w_mbytes_per_sec": 0 00:04:45.846 }, 00:04:45.846 "claimed": false, 00:04:45.846 "zoned": false, 00:04:45.846 "supported_io_types": { 00:04:45.846 "read": true, 00:04:45.846 "write": true, 00:04:45.846 "unmap": true, 00:04:45.846 "flush": true, 00:04:45.846 "reset": true, 00:04:45.846 "nvme_admin": false, 00:04:45.846 "nvme_io": false, 00:04:45.846 "nvme_io_md": false, 00:04:45.846 "write_zeroes": true, 00:04:45.846 "zcopy": true, 00:04:45.846 "get_zone_info": false, 00:04:45.846 "zone_management": false, 00:04:45.846 "zone_append": false, 00:04:45.846 "compare": false, 00:04:45.846 "compare_and_write": false, 00:04:45.846 "abort": true, 00:04:45.846 "seek_hole": false, 00:04:45.846 "seek_data": false, 00:04:45.846 "copy": true, 00:04:45.846 "nvme_iov_md": false 00:04:45.846 }, 00:04:45.846 "memory_domains": [ 00:04:45.846 { 00:04:45.846 "dma_device_id": "system", 00:04:45.846 "dma_device_type": 1 00:04:45.846 }, 00:04:45.846 { 00:04:45.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.846 "dma_device_type": 2 00:04:45.846 } 00:04:45.846 ], 00:04:45.846 "driver_specific": {} 00:04:45.846 } 00:04:45.846 ]' 00:04:45.846 13:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.105 [2024-07-25 13:00:38.056575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:46.105 [2024-07-25 13:00:38.056662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.105 [2024-07-25 13:00:38.056695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:46.105 [2024-07-25 13:00:38.056708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.105 [2024-07-25 13:00:38.059435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.105 [2024-07-25 13:00:38.059508] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.105 Passthru0 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.105 { 00:04:46.105 "name": "Malloc2", 00:04:46.105 "aliases": [ 00:04:46.105 "3f9a9e57-3f90-43fa-aa98-dbfcfa8c6543" 00:04:46.105 ], 00:04:46.105 "product_name": "Malloc disk", 00:04:46.105 "block_size": 512, 00:04:46.105 "num_blocks": 16384, 00:04:46.105 "uuid": "3f9a9e57-3f90-43fa-aa98-dbfcfa8c6543", 00:04:46.105 "assigned_rate_limits": { 00:04:46.105 "rw_ios_per_sec": 0, 00:04:46.105 "rw_mbytes_per_sec": 0, 00:04:46.105 "r_mbytes_per_sec": 0, 00:04:46.105 "w_mbytes_per_sec": 0 00:04:46.105 }, 00:04:46.105 "claimed": true, 00:04:46.105 "claim_type": "exclusive_write", 00:04:46.105 "zoned": false, 00:04:46.105 "supported_io_types": { 00:04:46.105 "read": true, 00:04:46.105 "write": true, 00:04:46.105 "unmap": true, 00:04:46.105 "flush": true, 00:04:46.105 "reset": true, 00:04:46.105 "nvme_admin": false, 00:04:46.105 "nvme_io": false, 00:04:46.105 "nvme_io_md": false, 00:04:46.105 "write_zeroes": true, 00:04:46.105 "zcopy": true, 00:04:46.105 "get_zone_info": false, 00:04:46.105 "zone_management": false, 00:04:46.105 "zone_append": false, 00:04:46.105 "compare": false, 00:04:46.105 "compare_and_write": false, 00:04:46.105 "abort": true, 00:04:46.105 "seek_hole": false, 00:04:46.105 "seek_data": false, 00:04:46.105 "copy": true, 00:04:46.105 "nvme_iov_md": false 00:04:46.105 }, 00:04:46.105 "memory_domains": [ 00:04:46.105 { 00:04:46.105 "dma_device_id": "system", 00:04:46.105 "dma_device_type": 1 00:04:46.105 }, 00:04:46.105 { 00:04:46.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.105 "dma_device_type": 2 00:04:46.105 } 00:04:46.105 ], 00:04:46.105 "driver_specific": {} 00:04:46.105 }, 00:04:46.105 { 00:04:46.105 "name": "Passthru0", 00:04:46.105 "aliases": [ 00:04:46.105 "fde1f497-1d14-5333-b46f-bd171993fe6f" 00:04:46.105 ], 00:04:46.105 "product_name": "passthru", 00:04:46.105 "block_size": 512, 00:04:46.105 "num_blocks": 16384, 00:04:46.105 "uuid": "fde1f497-1d14-5333-b46f-bd171993fe6f", 00:04:46.105 "assigned_rate_limits": { 00:04:46.105 "rw_ios_per_sec": 0, 00:04:46.105 "rw_mbytes_per_sec": 0, 00:04:46.105 "r_mbytes_per_sec": 0, 00:04:46.105 "w_mbytes_per_sec": 0 00:04:46.105 }, 00:04:46.105 "claimed": false, 00:04:46.105 "zoned": false, 00:04:46.105 "supported_io_types": { 00:04:46.105 "read": true, 00:04:46.105 "write": true, 00:04:46.105 "unmap": true, 00:04:46.105 "flush": true, 00:04:46.105 "reset": true, 00:04:46.105 "nvme_admin": false, 00:04:46.105 "nvme_io": false, 00:04:46.105 "nvme_io_md": false, 00:04:46.105 "write_zeroes": true, 00:04:46.105 "zcopy": true, 00:04:46.105 "get_zone_info": false, 00:04:46.105 "zone_management": false, 00:04:46.105 "zone_append": false, 00:04:46.105 "compare": false, 00:04:46.105 "compare_and_write": false, 00:04:46.105 "abort": true, 00:04:46.105 "seek_hole": false, 00:04:46.105 "seek_data": false, 00:04:46.105 "copy": true, 00:04:46.105 "nvme_iov_md": false 00:04:46.105 }, 00:04:46.105 
"memory_domains": [ 00:04:46.105 { 00:04:46.105 "dma_device_id": "system", 00:04:46.105 "dma_device_type": 1 00:04:46.105 }, 00:04:46.105 { 00:04:46.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.105 "dma_device_type": 2 00:04:46.105 } 00:04:46.105 ], 00:04:46.105 "driver_specific": { 00:04:46.105 "passthru": { 00:04:46.105 "name": "Passthru0", 00:04:46.105 "base_bdev_name": "Malloc2" 00:04:46.105 } 00:04:46.105 } 00:04:46.105 } 00:04:46.105 ]' 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.105 ************************************ 00:04:46.105 END TEST rpc_daemon_integrity 00:04:46.105 ************************************ 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.105 00:04:46.105 real 0m0.363s 00:04:46.105 user 0m0.229s 00:04:46.105 sys 0m0.040s 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.105 13:00:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.105 13:00:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.105 13:00:38 rpc -- rpc/rpc.sh@84 -- # killprocess 62048 00:04:46.105 13:00:38 rpc -- common/autotest_common.sh@950 -- # '[' -z 62048 ']' 00:04:46.105 13:00:38 rpc -- common/autotest_common.sh@954 -- # kill -0 62048 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62048 00:04:46.377 killing process with pid 62048 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62048' 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@969 -- # kill 62048 00:04:46.377 13:00:38 rpc -- common/autotest_common.sh@974 -- # wait 62048 00:04:48.281 ************************************ 00:04:48.281 END 
TEST rpc 00:04:48.281 ************************************ 00:04:48.281 00:04:48.281 real 0m4.606s 00:04:48.281 user 0m5.356s 00:04:48.281 sys 0m0.749s 00:04:48.281 13:00:40 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.281 13:00:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.281 13:00:40 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.281 13:00:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.281 13:00:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.281 13:00:40 -- common/autotest_common.sh@10 -- # set +x 00:04:48.281 ************************************ 00:04:48.281 START TEST skip_rpc 00:04:48.281 ************************************ 00:04:48.281 13:00:40 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.281 * Looking for test storage... 00:04:48.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.281 13:00:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.281 13:00:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.281 13:00:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.281 13:00:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.281 13:00:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.281 13:00:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.281 ************************************ 00:04:48.281 START TEST skip_rpc 00:04:48.281 ************************************ 00:04:48.281 13:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:48.281 13:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62263 00:04:48.282 13:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.282 13:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:48.282 13:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.282 [2024-07-25 13:00:40.433960] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:48.282 [2024-07-25 13:00:40.434190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62263 ] 00:04:48.540 [2024-07-25 13:00:40.588936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.798 [2024-07-25 13:00:40.736400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.058 13:00:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62263 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62263 ']' 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62263 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62263 00:04:54.059 killing process with pid 62263 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62263' 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62263 00:04:54.059 13:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62263 00:04:54.993 00:04:54.993 real 0m6.798s 00:04:54.993 user 0m6.417s 00:04:54.993 sys 0m0.268s 00:04:54.993 13:00:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.993 13:00:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.993 ************************************ 00:04:54.993 END TEST skip_rpc 00:04:54.993 
************************************ 00:04:55.251 13:00:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:55.251 13:00:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.251 13:00:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.251 13:00:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.251 ************************************ 00:04:55.251 START TEST skip_rpc_with_json 00:04:55.251 ************************************ 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:55.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62362 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62362 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62362 ']' 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.251 13:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.251 [2024-07-25 13:00:47.300668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:55.251 [2024-07-25 13:00:47.300812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62362 ] 00:04:55.509 [2024-07-25 13:00:47.456155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.509 [2024-07-25 13:00:47.609103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 [2024-07-25 13:00:48.237568] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:56.076 request: 00:04:56.076 { 00:04:56.076 "trtype": "tcp", 00:04:56.076 "method": "nvmf_get_transports", 00:04:56.076 "req_id": 1 00:04:56.076 } 00:04:56.076 Got JSON-RPC error response 00:04:56.076 response: 00:04:56.076 { 00:04:56.076 "code": -19, 00:04:56.076 "message": "No such device" 00:04:56.076 } 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 [2024-07-25 13:00:48.249744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.076 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.335 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.335 13:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.335 { 00:04:56.335 "subsystems": [ 00:04:56.335 { 00:04:56.335 "subsystem": "keyring", 00:04:56.335 "config": [] 00:04:56.335 }, 00:04:56.335 { 00:04:56.335 "subsystem": "iobuf", 00:04:56.335 "config": [ 00:04:56.335 { 00:04:56.335 "method": "iobuf_set_options", 00:04:56.335 "params": { 00:04:56.335 "small_pool_count": 8192, 00:04:56.335 "large_pool_count": 1024, 00:04:56.335 "small_bufsize": 8192, 00:04:56.335 "large_bufsize": 135168 00:04:56.335 } 00:04:56.335 } 00:04:56.335 ] 00:04:56.335 }, 00:04:56.335 { 00:04:56.335 "subsystem": "sock", 00:04:56.335 "config": [ 00:04:56.335 { 00:04:56.335 "method": "sock_set_default_impl", 00:04:56.335 "params": { 00:04:56.336 "impl_name": "posix" 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "sock_impl_set_options", 00:04:56.336 "params": { 00:04:56.336 "impl_name": "ssl", 00:04:56.336 "recv_buf_size": 4096, 00:04:56.336 "send_buf_size": 4096, 
00:04:56.336 "enable_recv_pipe": true, 00:04:56.336 "enable_quickack": false, 00:04:56.336 "enable_placement_id": 0, 00:04:56.336 "enable_zerocopy_send_server": true, 00:04:56.336 "enable_zerocopy_send_client": false, 00:04:56.336 "zerocopy_threshold": 0, 00:04:56.336 "tls_version": 0, 00:04:56.336 "enable_ktls": false 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "sock_impl_set_options", 00:04:56.336 "params": { 00:04:56.336 "impl_name": "posix", 00:04:56.336 "recv_buf_size": 2097152, 00:04:56.336 "send_buf_size": 2097152, 00:04:56.336 "enable_recv_pipe": true, 00:04:56.336 "enable_quickack": false, 00:04:56.336 "enable_placement_id": 0, 00:04:56.336 "enable_zerocopy_send_server": true, 00:04:56.336 "enable_zerocopy_send_client": false, 00:04:56.336 "zerocopy_threshold": 0, 00:04:56.336 "tls_version": 0, 00:04:56.336 "enable_ktls": false 00:04:56.336 } 00:04:56.336 } 00:04:56.336 ] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "vmd", 00:04:56.336 "config": [] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "accel", 00:04:56.336 "config": [ 00:04:56.336 { 00:04:56.336 "method": "accel_set_options", 00:04:56.336 "params": { 00:04:56.336 "small_cache_size": 128, 00:04:56.336 "large_cache_size": 16, 00:04:56.336 "task_count": 2048, 00:04:56.336 "sequence_count": 2048, 00:04:56.336 "buf_count": 2048 00:04:56.336 } 00:04:56.336 } 00:04:56.336 ] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "bdev", 00:04:56.336 "config": [ 00:04:56.336 { 00:04:56.336 "method": "bdev_set_options", 00:04:56.336 "params": { 00:04:56.336 "bdev_io_pool_size": 65535, 00:04:56.336 "bdev_io_cache_size": 256, 00:04:56.336 "bdev_auto_examine": true, 00:04:56.336 "iobuf_small_cache_size": 128, 00:04:56.336 "iobuf_large_cache_size": 16 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "bdev_raid_set_options", 00:04:56.336 "params": { 00:04:56.336 "process_window_size_kb": 1024, 00:04:56.336 "process_max_bandwidth_mb_sec": 0 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "bdev_iscsi_set_options", 00:04:56.336 "params": { 00:04:56.336 "timeout_sec": 30 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "bdev_nvme_set_options", 00:04:56.336 "params": { 00:04:56.336 "action_on_timeout": "none", 00:04:56.336 "timeout_us": 0, 00:04:56.336 "timeout_admin_us": 0, 00:04:56.336 "keep_alive_timeout_ms": 10000, 00:04:56.336 "arbitration_burst": 0, 00:04:56.336 "low_priority_weight": 0, 00:04:56.336 "medium_priority_weight": 0, 00:04:56.336 "high_priority_weight": 0, 00:04:56.336 "nvme_adminq_poll_period_us": 10000, 00:04:56.336 "nvme_ioq_poll_period_us": 0, 00:04:56.336 "io_queue_requests": 0, 00:04:56.336 "delay_cmd_submit": true, 00:04:56.336 "transport_retry_count": 4, 00:04:56.336 "bdev_retry_count": 3, 00:04:56.336 "transport_ack_timeout": 0, 00:04:56.336 "ctrlr_loss_timeout_sec": 0, 00:04:56.336 "reconnect_delay_sec": 0, 00:04:56.336 "fast_io_fail_timeout_sec": 0, 00:04:56.336 "disable_auto_failback": false, 00:04:56.336 "generate_uuids": false, 00:04:56.336 "transport_tos": 0, 00:04:56.336 "nvme_error_stat": false, 00:04:56.336 "rdma_srq_size": 0, 00:04:56.336 "io_path_stat": false, 00:04:56.336 "allow_accel_sequence": false, 00:04:56.336 "rdma_max_cq_size": 0, 00:04:56.336 "rdma_cm_event_timeout_ms": 0, 00:04:56.336 "dhchap_digests": [ 00:04:56.336 "sha256", 00:04:56.336 "sha384", 00:04:56.336 "sha512" 00:04:56.336 ], 00:04:56.336 "dhchap_dhgroups": [ 00:04:56.336 "null", 00:04:56.336 "ffdhe2048", 00:04:56.336 
"ffdhe3072", 00:04:56.336 "ffdhe4096", 00:04:56.336 "ffdhe6144", 00:04:56.336 "ffdhe8192" 00:04:56.336 ] 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "bdev_nvme_set_hotplug", 00:04:56.336 "params": { 00:04:56.336 "period_us": 100000, 00:04:56.336 "enable": false 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "bdev_wait_for_examine" 00:04:56.336 } 00:04:56.336 ] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "scsi", 00:04:56.336 "config": null 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "scheduler", 00:04:56.336 "config": [ 00:04:56.336 { 00:04:56.336 "method": "framework_set_scheduler", 00:04:56.336 "params": { 00:04:56.336 "name": "static" 00:04:56.336 } 00:04:56.336 } 00:04:56.336 ] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "vhost_scsi", 00:04:56.336 "config": [] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "vhost_blk", 00:04:56.336 "config": [] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "ublk", 00:04:56.336 "config": [] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "nbd", 00:04:56.336 "config": [] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "nvmf", 00:04:56.336 "config": [ 00:04:56.336 { 00:04:56.336 "method": "nvmf_set_config", 00:04:56.336 "params": { 00:04:56.336 "discovery_filter": "match_any", 00:04:56.336 "admin_cmd_passthru": { 00:04:56.336 "identify_ctrlr": false 00:04:56.336 } 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "nvmf_set_max_subsystems", 00:04:56.336 "params": { 00:04:56.336 "max_subsystems": 1024 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "nvmf_set_crdt", 00:04:56.336 "params": { 00:04:56.336 "crdt1": 0, 00:04:56.336 "crdt2": 0, 00:04:56.336 "crdt3": 0 00:04:56.336 } 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "method": "nvmf_create_transport", 00:04:56.336 "params": { 00:04:56.336 "trtype": "TCP", 00:04:56.336 "max_queue_depth": 128, 00:04:56.336 "max_io_qpairs_per_ctrlr": 127, 00:04:56.336 "in_capsule_data_size": 4096, 00:04:56.336 "max_io_size": 131072, 00:04:56.336 "io_unit_size": 131072, 00:04:56.336 "max_aq_depth": 128, 00:04:56.336 "num_shared_buffers": 511, 00:04:56.336 "buf_cache_size": 4294967295, 00:04:56.336 "dif_insert_or_strip": false, 00:04:56.336 "zcopy": false, 00:04:56.336 "c2h_success": true, 00:04:56.336 "sock_priority": 0, 00:04:56.336 "abort_timeout_sec": 1, 00:04:56.336 "ack_timeout": 0, 00:04:56.336 "data_wr_pool_size": 0 00:04:56.336 } 00:04:56.336 } 00:04:56.336 ] 00:04:56.336 }, 00:04:56.336 { 00:04:56.336 "subsystem": "iscsi", 00:04:56.336 "config": [ 00:04:56.336 { 00:04:56.336 "method": "iscsi_set_options", 00:04:56.336 "params": { 00:04:56.336 "node_base": "iqn.2016-06.io.spdk", 00:04:56.336 "max_sessions": 128, 00:04:56.336 "max_connections_per_session": 2, 00:04:56.336 "max_queue_depth": 64, 00:04:56.336 "default_time2wait": 2, 00:04:56.336 "default_time2retain": 20, 00:04:56.336 "first_burst_length": 8192, 00:04:56.336 "immediate_data": true, 00:04:56.336 "allow_duplicated_isid": false, 00:04:56.336 "error_recovery_level": 0, 00:04:56.336 "nop_timeout": 60, 00:04:56.336 "nop_in_interval": 30, 00:04:56.336 "disable_chap": false, 00:04:56.336 "require_chap": false, 00:04:56.336 "mutual_chap": false, 00:04:56.337 "chap_group": 0, 00:04:56.337 "max_large_datain_per_connection": 64, 00:04:56.337 "max_r2t_per_connection": 4, 00:04:56.337 "pdu_pool_size": 36864, 00:04:56.337 "immediate_data_pool_size": 16384, 00:04:56.337 "data_out_pool_size": 2048 
00:04:56.337 } 00:04:56.337 } 00:04:56.337 ] 00:04:56.337 } 00:04:56.337 ] 00:04:56.337 } 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62362 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62362 ']' 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62362 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62362 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.337 killing process with pid 62362 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62362' 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62362 00:04:56.337 13:00:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62362 00:04:58.248 13:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62407 00:04:58.248 13:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.248 13:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62407 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62407 ']' 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62407 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62407 00:05:03.537 killing process with pid 62407 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62407' 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62407 00:05:03.537 13:00:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62407 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.437 00:05:05.437 real 0m10.065s 00:05:05.437 user 0m9.764s 00:05:05.437 sys 0m0.653s 00:05:05.437 ************************************ 00:05:05.437 END TEST skip_rpc_with_json 00:05:05.437 ************************************ 00:05:05.437 13:00:57 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.437 13:00:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.437 13:00:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.437 13:00:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.437 13:00:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.437 ************************************ 00:05:05.437 START TEST skip_rpc_with_delay 00:05:05.437 ************************************ 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.437 [2024-07-25 13:00:57.416731] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
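The skip_rpc_with_delay case above only verifies that the target rejects an invalid flag combination. A minimal sketch of that check, assuming a built spdk_tgt under build/bin (paths are illustrative, not taken from this run):

    log=$(mktemp)
    # --wait-for-rpc is meaningless together with --no-rpc-server, so startup must fail
    ! build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc > "$log" 2>&1
    grep -q "Cannot use '--wait-for-rpc' if no RPC server is going to be started." "$log"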
00:05:05.437 [2024-07-25 13:00:57.416874] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:05.437 ************************************ 00:05:05.437 END TEST skip_rpc_with_delay 00:05:05.437 ************************************ 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.437 00:05:05.437 real 0m0.156s 00:05:05.437 user 0m0.097s 00:05:05.437 sys 0m0.057s 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.437 13:00:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:05.437 13:00:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.437 13:00:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.437 13:00:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.437 13:00:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.437 13:00:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.437 13:00:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.437 ************************************ 00:05:05.437 START TEST exit_on_failed_rpc_init 00:05:05.437 ************************************ 00:05:05.437 13:00:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:05.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.437 13:00:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62535 00:05:05.437 13:00:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62535 00:05:05.437 13:00:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.437 13:00:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62535 ']' 00:05:05.437 13:00:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.437 13:00:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.438 13:00:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.438 13:00:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.438 13:00:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.696 [2024-07-25 13:00:57.647511] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:05.696 [2024-07-25 13:00:57.647694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62535 ] 00:05:05.696 [2024-07-25 13:00:57.819353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.954 [2024-07-25 13:00:57.999800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.519 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.519 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:06.519 13:00:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:06.520 13:00:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.777 [2024-07-25 13:00:58.808888] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:06.777 [2024-07-25 13:00:58.809283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62554 ] 00:05:07.036 [2024-07-25 13:00:58.982040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.036 [2024-07-25 13:00:59.167020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.036 [2024-07-25 13:00:59.167394] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
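The exit_on_failed_rpc_init failure provoked here comes from two targets contending for the default RPC socket. A rough equivalent, assuming a built spdk_tgt (the sleep is a crude stand-in for the waitforlisten helper used by the test):

    build/bin/spdk_tgt -m 0x1 &           # first target claims /var/tmp/spdk.sock
    first_pid=$!
    sleep 2
    ! build/bin/spdk_tgt -m 0x2           # second target must fail: RPC socket already in use
    kill "$first_pid"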
00:05:07.036 [2024-07-25 13:00:59.167567] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:07.036 [2024-07-25 13:00:59.167700] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62535 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62535 ']' 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62535 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62535 00:05:07.602 killing process with pid 62535 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62535' 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62535 00:05:07.602 13:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62535 00:05:09.505 00:05:09.505 real 0m4.077s 00:05:09.505 user 0m4.710s 00:05:09.505 sys 0m0.529s 00:05:09.505 13:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.505 13:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.505 ************************************ 00:05:09.505 END TEST exit_on_failed_rpc_init 00:05:09.505 ************************************ 00:05:09.505 13:01:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.505 00:05:09.505 real 0m21.397s 00:05:09.505 user 0m21.084s 00:05:09.505 sys 0m1.690s 00:05:09.505 ************************************ 00:05:09.505 END TEST skip_rpc 00:05:09.505 ************************************ 00:05:09.505 13:01:01 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.505 13:01:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.764 13:01:01 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:09.764 13:01:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.764 13:01:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.764 13:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:09.764 
************************************ 00:05:09.764 START TEST rpc_client 00:05:09.764 ************************************ 00:05:09.764 13:01:01 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:09.764 * Looking for test storage... 00:05:09.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:09.764 13:01:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:09.764 OK 00:05:09.764 13:01:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:09.764 00:05:09.764 real 0m0.137s 00:05:09.764 user 0m0.064s 00:05:09.764 sys 0m0.077s 00:05:09.764 13:01:01 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.764 13:01:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:09.764 ************************************ 00:05:09.764 END TEST rpc_client 00:05:09.764 ************************************ 00:05:09.764 13:01:01 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:09.764 13:01:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.764 13:01:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.764 13:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:09.764 ************************************ 00:05:09.764 START TEST json_config 00:05:09.764 ************************************ 00:05:09.764 13:01:01 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:09.764 13:01:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.764 13:01:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.023 13:01:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81f3884e-77f2-48f6-93b2-e58369b5121e 00:05:10.023 13:01:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=81f3884e-77f2-48f6-93b2-e58369b5121e 00:05:10.023 13:01:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.023 13:01:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.023 13:01:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.023 13:01:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.023 13:01:01 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.023 13:01:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.023 13:01:01 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.023 13:01:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.023 13:01:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.023 13:01:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.023 13:01:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.023 13:01:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:10.024 13:01:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@47 -- # : 0 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:10.024 13:01:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:10.024 13:01:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:10.024 13:01:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:10.024 13:01:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:10.024 13:01:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:10.024 13:01:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:10.024 WARNING: No tests are enabled so not running JSON configuration tests 00:05:10.024 13:01:01 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:10.024 13:01:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:10.024 00:05:10.024 real 0m0.082s 00:05:10.024 user 0m0.040s 00:05:10.024 sys 0m0.038s 00:05:10.024 ************************************ 00:05:10.024 END TEST json_config 00:05:10.024 ************************************ 00:05:10.024 13:01:01 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.024 13:01:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.024 13:01:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.024 13:01:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.024 13:01:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.024 13:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:10.024 ************************************ 00:05:10.024 START TEST json_config_extra_key 00:05:10.024 ************************************ 00:05:10.024 13:01:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:81f3884e-77f2-48f6-93b2-e58369b5121e 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=81f3884e-77f2-48f6-93b2-e58369b5121e 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.024 13:01:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.024 13:01:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.024 13:01:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.024 
13:01:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.024 13:01:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.024 13:01:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.024 13:01:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:10.024 13:01:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:10.024 13:01:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:10.024 13:01:02 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:10.024 INFO: launching applications... 00:05:10.024 13:01:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.024 Waiting for target to run... 00:05:10.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62739 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62739 /var/tmp/spdk_tgt.sock 00:05:10.024 13:01:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 62739 ']' 00:05:10.024 13:01:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.024 13:01:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.024 13:01:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.024 13:01:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.024 13:01:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.024 13:01:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.283 [2024-07-25 13:01:02.218657] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:10.283 [2024-07-25 13:01:02.218857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62739 ] 00:05:10.541 [2024-07-25 13:01:02.555245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.541 [2024-07-25 13:01:02.714352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.477 00:05:11.477 13:01:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.477 13:01:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.477 INFO: shutting down applications... 00:05:11.477 13:01:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:11.477 13:01:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62739 ]] 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62739 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62739 00:05:11.477 13:01:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.734 13:01:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.734 13:01:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.734 13:01:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62739 00:05:11.734 13:01:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.300 13:01:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.300 13:01:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.300 13:01:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62739 00:05:12.300 13:01:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.868 13:01:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.868 13:01:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.868 13:01:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62739 00:05:12.868 13:01:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.436 13:01:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.436 13:01:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.436 13:01:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62739 00:05:13.436 13:01:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.695 13:01:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.695 13:01:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.695 13:01:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62739 
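The shutdown handshake traced above is a plain signal-and-poll loop: json_config/common.sh sends SIGINT to the target, then re-checks the PID up to 30 times with half-second sleeps. A minimal standalone sketch of that pattern follows; the backgrounding and PID capture stand in for the harness's waitforlisten/app_pid bookkeeping and are not part of the traced scripts.

  # Launch the target with a JSON config on a private RPC socket,
  # as json_config_test_start_app does in the trace above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  tgt_pid=$!

  # Ask it to exit, then poll for up to 30 * 0.5 s before giving up,
  # mirroring the json_config/common.sh lines 38-45 shown in the trace.
  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$tgt_pid" 2>/dev/null || break
      sleep 0.5
  done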
00:05:13.695 13:01:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.695 13:01:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:13.695 SPDK target shutdown done 00:05:13.695 Success 00:05:13.695 13:01:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.695 13:01:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.695 13:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:13.695 ************************************ 00:05:13.695 END TEST json_config_extra_key 00:05:13.695 ************************************ 00:05:13.695 00:05:13.695 real 0m3.812s 00:05:13.695 user 0m3.539s 00:05:13.695 sys 0m0.447s 00:05:13.695 13:01:05 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.695 13:01:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.695 13:01:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.695 13:01:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.695 13:01:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.695 13:01:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.954 ************************************ 00:05:13.954 START TEST alias_rpc 00:05:13.954 ************************************ 00:05:13.954 13:01:05 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.954 * Looking for test storage... 00:05:13.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:13.954 13:01:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.954 13:01:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62825 00:05:13.954 13:01:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.954 13:01:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62825 00:05:13.954 13:01:05 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 62825 ']' 00:05:13.954 13:01:05 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.954 13:01:05 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.954 13:01:05 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.954 13:01:05 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.954 13:01:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.954 [2024-07-25 13:01:06.081204] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:13.954 [2024-07-25 13:01:06.081376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62825 ] 00:05:14.212 [2024-07-25 13:01:06.254407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.470 [2024-07-25 13:01:06.430550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.038 13:01:07 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.038 13:01:07 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:15.038 13:01:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:15.296 13:01:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62825 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 62825 ']' 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 62825 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62825 00:05:15.296 killing process with pid 62825 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62825' 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@969 -- # kill 62825 00:05:15.296 13:01:07 alias_rpc -- common/autotest_common.sh@974 -- # wait 62825 00:05:17.199 ************************************ 00:05:17.199 END TEST alias_rpc 00:05:17.199 ************************************ 00:05:17.199 00:05:17.199 real 0m3.457s 00:05:17.199 user 0m3.675s 00:05:17.199 sys 0m0.457s 00:05:17.199 13:01:09 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.199 13:01:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.199 13:01:09 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:17.199 13:01:09 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:17.199 13:01:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.199 13:01:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.199 13:01:09 -- common/autotest_common.sh@10 -- # set +x 00:05:17.458 ************************************ 00:05:17.458 START TEST spdkcli_tcp 00:05:17.458 ************************************ 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:17.458 * Looking for test storage... 
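Both the alias_rpc pass that just finished and the spdkcli_tcp pass announced here drive the running target through scripts/rpc.py. A minimal sketch against the default /var/tmp/spdk.sock socket is below: load_config -i is the call alias_rpc traced above, and rpc_get_methods is the one the next test issues over TCP. The config.json name, the stdin redirect, and the reading of -i as "also accept deprecated alias method names" are assumptions made for illustration, not details shown verbatim in this trace.

  # Replay a saved configuration through the default UNIX socket;
  # -i is assumed here to let deprecated alias method names load.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < config.json

  # List every RPC method the target has registered.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods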
00:05:17.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62924 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62924 00:05:17.458 13:01:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 62924 ']' 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.458 13:01:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.458 [2024-07-25 13:01:09.577747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:17.458 [2024-07-25 13:01:09.577911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62924 ] 00:05:17.716 [2024-07-25 13:01:09.739244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.974 [2024-07-25 13:01:09.922037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.974 [2024-07-25 13:01:09.922049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.540 13:01:10 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.540 13:01:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:18.540 13:01:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.540 13:01:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62941 00:05:18.540 13:01:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.798 [ 00:05:18.798 "bdev_malloc_delete", 00:05:18.798 "bdev_malloc_create", 00:05:18.798 "bdev_null_resize", 00:05:18.798 "bdev_null_delete", 00:05:18.798 "bdev_null_create", 00:05:18.798 "bdev_nvme_cuse_unregister", 00:05:18.798 "bdev_nvme_cuse_register", 00:05:18.798 "bdev_opal_new_user", 00:05:18.798 "bdev_opal_set_lock_state", 00:05:18.798 "bdev_opal_delete", 00:05:18.798 "bdev_opal_get_info", 00:05:18.798 "bdev_opal_create", 00:05:18.798 "bdev_nvme_opal_revert", 00:05:18.798 "bdev_nvme_opal_init", 00:05:18.798 "bdev_nvme_send_cmd", 00:05:18.798 "bdev_nvme_get_path_iostat", 00:05:18.798 "bdev_nvme_get_mdns_discovery_info", 00:05:18.798 "bdev_nvme_stop_mdns_discovery", 00:05:18.798 "bdev_nvme_start_mdns_discovery", 00:05:18.798 "bdev_nvme_set_multipath_policy", 00:05:18.798 "bdev_nvme_set_preferred_path", 00:05:18.798 "bdev_nvme_get_io_paths", 00:05:18.798 "bdev_nvme_remove_error_injection", 00:05:18.798 "bdev_nvme_add_error_injection", 00:05:18.798 "bdev_nvme_get_discovery_info", 00:05:18.798 "bdev_nvme_stop_discovery", 00:05:18.798 "bdev_nvme_start_discovery", 00:05:18.798 "bdev_nvme_get_controller_health_info", 00:05:18.798 "bdev_nvme_disable_controller", 00:05:18.798 "bdev_nvme_enable_controller", 00:05:18.798 "bdev_nvme_reset_controller", 00:05:18.798 "bdev_nvme_get_transport_statistics", 00:05:18.798 "bdev_nvme_apply_firmware", 00:05:18.798 "bdev_nvme_detach_controller", 00:05:18.798 "bdev_nvme_get_controllers", 00:05:18.798 "bdev_nvme_attach_controller", 00:05:18.798 "bdev_nvme_set_hotplug", 00:05:18.798 "bdev_nvme_set_options", 00:05:18.798 "bdev_passthru_delete", 00:05:18.798 "bdev_passthru_create", 00:05:18.798 "bdev_lvol_set_parent_bdev", 00:05:18.798 "bdev_lvol_set_parent", 00:05:18.798 "bdev_lvol_check_shallow_copy", 00:05:18.798 "bdev_lvol_start_shallow_copy", 00:05:18.798 "bdev_lvol_grow_lvstore", 00:05:18.798 "bdev_lvol_get_lvols", 00:05:18.798 "bdev_lvol_get_lvstores", 00:05:18.798 "bdev_lvol_delete", 00:05:18.798 "bdev_lvol_set_read_only", 00:05:18.798 "bdev_lvol_resize", 00:05:18.798 "bdev_lvol_decouple_parent", 00:05:18.798 "bdev_lvol_inflate", 00:05:18.798 "bdev_lvol_rename", 00:05:18.798 "bdev_lvol_clone_bdev", 00:05:18.798 "bdev_lvol_clone", 00:05:18.798 "bdev_lvol_snapshot", 00:05:18.798 "bdev_lvol_create", 00:05:18.798 "bdev_lvol_delete_lvstore", 00:05:18.798 "bdev_lvol_rename_lvstore", 00:05:18.798 "bdev_lvol_create_lvstore", 
00:05:18.798 "bdev_raid_set_options", 00:05:18.798 "bdev_raid_remove_base_bdev", 00:05:18.798 "bdev_raid_add_base_bdev", 00:05:18.798 "bdev_raid_delete", 00:05:18.798 "bdev_raid_create", 00:05:18.798 "bdev_raid_get_bdevs", 00:05:18.798 "bdev_error_inject_error", 00:05:18.798 "bdev_error_delete", 00:05:18.798 "bdev_error_create", 00:05:18.798 "bdev_split_delete", 00:05:18.798 "bdev_split_create", 00:05:18.798 "bdev_delay_delete", 00:05:18.798 "bdev_delay_create", 00:05:18.798 "bdev_delay_update_latency", 00:05:18.798 "bdev_zone_block_delete", 00:05:18.798 "bdev_zone_block_create", 00:05:18.798 "blobfs_create", 00:05:18.798 "blobfs_detect", 00:05:18.798 "blobfs_set_cache_size", 00:05:18.798 "bdev_xnvme_delete", 00:05:18.798 "bdev_xnvme_create", 00:05:18.798 "bdev_aio_delete", 00:05:18.798 "bdev_aio_rescan", 00:05:18.798 "bdev_aio_create", 00:05:18.798 "bdev_ftl_set_property", 00:05:18.798 "bdev_ftl_get_properties", 00:05:18.798 "bdev_ftl_get_stats", 00:05:18.798 "bdev_ftl_unmap", 00:05:18.798 "bdev_ftl_unload", 00:05:18.798 "bdev_ftl_delete", 00:05:18.798 "bdev_ftl_load", 00:05:18.798 "bdev_ftl_create", 00:05:18.798 "bdev_virtio_attach_controller", 00:05:18.798 "bdev_virtio_scsi_get_devices", 00:05:18.798 "bdev_virtio_detach_controller", 00:05:18.798 "bdev_virtio_blk_set_hotplug", 00:05:18.798 "bdev_iscsi_delete", 00:05:18.798 "bdev_iscsi_create", 00:05:18.798 "bdev_iscsi_set_options", 00:05:18.798 "accel_error_inject_error", 00:05:18.798 "ioat_scan_accel_module", 00:05:18.799 "dsa_scan_accel_module", 00:05:18.799 "iaa_scan_accel_module", 00:05:18.799 "keyring_file_remove_key", 00:05:18.799 "keyring_file_add_key", 00:05:18.799 "keyring_linux_set_options", 00:05:18.799 "iscsi_get_histogram", 00:05:18.799 "iscsi_enable_histogram", 00:05:18.799 "iscsi_set_options", 00:05:18.799 "iscsi_get_auth_groups", 00:05:18.799 "iscsi_auth_group_remove_secret", 00:05:18.799 "iscsi_auth_group_add_secret", 00:05:18.799 "iscsi_delete_auth_group", 00:05:18.799 "iscsi_create_auth_group", 00:05:18.799 "iscsi_set_discovery_auth", 00:05:18.799 "iscsi_get_options", 00:05:18.799 "iscsi_target_node_request_logout", 00:05:18.799 "iscsi_target_node_set_redirect", 00:05:18.799 "iscsi_target_node_set_auth", 00:05:18.799 "iscsi_target_node_add_lun", 00:05:18.799 "iscsi_get_stats", 00:05:18.799 "iscsi_get_connections", 00:05:18.799 "iscsi_portal_group_set_auth", 00:05:18.799 "iscsi_start_portal_group", 00:05:18.799 "iscsi_delete_portal_group", 00:05:18.799 "iscsi_create_portal_group", 00:05:18.799 "iscsi_get_portal_groups", 00:05:18.799 "iscsi_delete_target_node", 00:05:18.799 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.799 "iscsi_target_node_add_pg_ig_maps", 00:05:18.799 "iscsi_create_target_node", 00:05:18.799 "iscsi_get_target_nodes", 00:05:18.799 "iscsi_delete_initiator_group", 00:05:18.799 "iscsi_initiator_group_remove_initiators", 00:05:18.799 "iscsi_initiator_group_add_initiators", 00:05:18.799 "iscsi_create_initiator_group", 00:05:18.799 "iscsi_get_initiator_groups", 00:05:18.799 "nvmf_set_crdt", 00:05:18.799 "nvmf_set_config", 00:05:18.799 "nvmf_set_max_subsystems", 00:05:18.799 "nvmf_stop_mdns_prr", 00:05:18.799 "nvmf_publish_mdns_prr", 00:05:18.799 "nvmf_subsystem_get_listeners", 00:05:18.799 "nvmf_subsystem_get_qpairs", 00:05:18.799 "nvmf_subsystem_get_controllers", 00:05:18.799 "nvmf_get_stats", 00:05:18.799 "nvmf_get_transports", 00:05:18.799 "nvmf_create_transport", 00:05:18.799 "nvmf_get_targets", 00:05:18.799 "nvmf_delete_target", 00:05:18.799 "nvmf_create_target", 00:05:18.799 
"nvmf_subsystem_allow_any_host", 00:05:18.799 "nvmf_subsystem_remove_host", 00:05:18.799 "nvmf_subsystem_add_host", 00:05:18.799 "nvmf_ns_remove_host", 00:05:18.799 "nvmf_ns_add_host", 00:05:18.799 "nvmf_subsystem_remove_ns", 00:05:18.799 "nvmf_subsystem_add_ns", 00:05:18.799 "nvmf_subsystem_listener_set_ana_state", 00:05:18.799 "nvmf_discovery_get_referrals", 00:05:18.799 "nvmf_discovery_remove_referral", 00:05:18.799 "nvmf_discovery_add_referral", 00:05:18.799 "nvmf_subsystem_remove_listener", 00:05:18.799 "nvmf_subsystem_add_listener", 00:05:18.799 "nvmf_delete_subsystem", 00:05:18.799 "nvmf_create_subsystem", 00:05:18.799 "nvmf_get_subsystems", 00:05:18.799 "env_dpdk_get_mem_stats", 00:05:18.799 "nbd_get_disks", 00:05:18.799 "nbd_stop_disk", 00:05:18.799 "nbd_start_disk", 00:05:18.799 "ublk_recover_disk", 00:05:18.799 "ublk_get_disks", 00:05:18.799 "ublk_stop_disk", 00:05:18.799 "ublk_start_disk", 00:05:18.799 "ublk_destroy_target", 00:05:18.799 "ublk_create_target", 00:05:18.799 "virtio_blk_create_transport", 00:05:18.799 "virtio_blk_get_transports", 00:05:18.799 "vhost_controller_set_coalescing", 00:05:18.799 "vhost_get_controllers", 00:05:18.799 "vhost_delete_controller", 00:05:18.799 "vhost_create_blk_controller", 00:05:18.799 "vhost_scsi_controller_remove_target", 00:05:18.799 "vhost_scsi_controller_add_target", 00:05:18.799 "vhost_start_scsi_controller", 00:05:18.799 "vhost_create_scsi_controller", 00:05:18.799 "thread_set_cpumask", 00:05:18.799 "framework_get_governor", 00:05:18.799 "framework_get_scheduler", 00:05:18.799 "framework_set_scheduler", 00:05:18.799 "framework_get_reactors", 00:05:18.799 "thread_get_io_channels", 00:05:18.799 "thread_get_pollers", 00:05:18.799 "thread_get_stats", 00:05:18.799 "framework_monitor_context_switch", 00:05:18.799 "spdk_kill_instance", 00:05:18.799 "log_enable_timestamps", 00:05:18.799 "log_get_flags", 00:05:18.799 "log_clear_flag", 00:05:18.799 "log_set_flag", 00:05:18.799 "log_get_level", 00:05:18.799 "log_set_level", 00:05:18.799 "log_get_print_level", 00:05:18.799 "log_set_print_level", 00:05:18.799 "framework_enable_cpumask_locks", 00:05:18.799 "framework_disable_cpumask_locks", 00:05:18.799 "framework_wait_init", 00:05:18.799 "framework_start_init", 00:05:18.799 "scsi_get_devices", 00:05:18.799 "bdev_get_histogram", 00:05:18.799 "bdev_enable_histogram", 00:05:18.799 "bdev_set_qos_limit", 00:05:18.799 "bdev_set_qd_sampling_period", 00:05:18.799 "bdev_get_bdevs", 00:05:18.799 "bdev_reset_iostat", 00:05:18.799 "bdev_get_iostat", 00:05:18.799 "bdev_examine", 00:05:18.799 "bdev_wait_for_examine", 00:05:18.799 "bdev_set_options", 00:05:18.799 "notify_get_notifications", 00:05:18.799 "notify_get_types", 00:05:18.799 "accel_get_stats", 00:05:18.799 "accel_set_options", 00:05:18.799 "accel_set_driver", 00:05:18.799 "accel_crypto_key_destroy", 00:05:18.799 "accel_crypto_keys_get", 00:05:18.799 "accel_crypto_key_create", 00:05:18.799 "accel_assign_opc", 00:05:18.799 "accel_get_module_info", 00:05:18.799 "accel_get_opc_assignments", 00:05:18.799 "vmd_rescan", 00:05:18.799 "vmd_remove_device", 00:05:18.799 "vmd_enable", 00:05:18.799 "sock_get_default_impl", 00:05:18.799 "sock_set_default_impl", 00:05:18.799 "sock_impl_set_options", 00:05:18.799 "sock_impl_get_options", 00:05:18.799 "iobuf_get_stats", 00:05:18.799 "iobuf_set_options", 00:05:18.799 "framework_get_pci_devices", 00:05:18.799 "framework_get_config", 00:05:18.799 "framework_get_subsystems", 00:05:18.799 "trace_get_info", 00:05:18.799 "trace_get_tpoint_group_mask", 00:05:18.799 
"trace_disable_tpoint_group", 00:05:18.799 "trace_enable_tpoint_group", 00:05:18.799 "trace_clear_tpoint_mask", 00:05:18.799 "trace_set_tpoint_mask", 00:05:18.799 "keyring_get_keys", 00:05:18.799 "spdk_get_version", 00:05:18.799 "rpc_get_methods" 00:05:18.799 ] 00:05:18.799 13:01:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.799 13:01:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.799 13:01:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62924 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 62924 ']' 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 62924 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62924 00:05:18.799 killing process with pid 62924 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62924' 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 62924 00:05:18.799 13:01:10 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 62924 00:05:21.388 ************************************ 00:05:21.388 END TEST spdkcli_tcp 00:05:21.388 ************************************ 00:05:21.388 00:05:21.388 real 0m3.585s 00:05:21.388 user 0m6.529s 00:05:21.388 sys 0m0.464s 00:05:21.388 13:01:12 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.388 13:01:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.388 13:01:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.388 13:01:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.388 13:01:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.388 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:21.388 ************************************ 00:05:21.388 START TEST dpdk_mem_utility 00:05:21.388 ************************************ 00:05:21.388 13:01:13 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.388 * Looking for test storage... 
00:05:21.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:21.388 13:01:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:21.388 13:01:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63027 00:05:21.388 13:01:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.388 13:01:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63027 00:05:21.388 13:01:13 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 63027 ']' 00:05:21.388 13:01:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.388 13:01:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.388 13:01:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.388 13:01:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.388 13:01:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.388 [2024-07-25 13:01:13.246157] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:21.388 [2024-07-25 13:01:13.246500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63027 ] 00:05:21.388 [2024-07-25 13:01:13.416943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.646 [2024-07-25 13:01:13.595832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.212 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.212 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:22.212 13:01:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.212 13:01:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.212 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.212 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.212 { 00:05:22.212 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.212 } 00:05:22.212 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.213 13:01:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:22.213 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:22.213 1 heaps totaling size 820.000000 MiB 00:05:22.213 size: 820.000000 MiB heap id: 0 00:05:22.213 end heaps---------- 00:05:22.213 8 mempools totaling size 598.116089 MiB 00:05:22.213 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:22.213 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:22.213 size: 84.521057 MiB name: bdev_io_63027 00:05:22.213 size: 51.011292 MiB name: evtpool_63027 00:05:22.213 size: 50.003479 MiB name: msgpool_63027 00:05:22.213 size: 21.763794 MiB name: PDU_Pool 00:05:22.213 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:05:22.213 size: 0.026123 MiB name: Session_Pool 00:05:22.213 end mempools------- 00:05:22.213 6 memzones totaling size 4.142822 MiB 00:05:22.213 size: 1.000366 MiB name: RG_ring_0_63027 00:05:22.213 size: 1.000366 MiB name: RG_ring_1_63027 00:05:22.213 size: 1.000366 MiB name: RG_ring_4_63027 00:05:22.213 size: 1.000366 MiB name: RG_ring_5_63027 00:05:22.213 size: 0.125366 MiB name: RG_ring_2_63027 00:05:22.213 size: 0.015991 MiB name: RG_ring_3_63027 00:05:22.213 end memzones------- 00:05:22.213 13:01:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:22.474 heap id: 0 total size: 820.000000 MiB number of busy elements: 296 number of free elements: 18 00:05:22.474 list of free elements. size: 18.452515 MiB 00:05:22.474 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:22.474 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:22.474 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:22.474 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:22.474 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:22.474 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:22.474 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:22.474 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:22.474 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:22.474 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:22.474 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:22.474 element at address: 0x200000200000 with size: 0.830200 MiB 00:05:22.474 element at address: 0x20001b000000 with size: 0.564880 MiB 00:05:22.474 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:22.474 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:22.474 element at address: 0x200013800000 with size: 0.467896 MiB 00:05:22.474 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:22.474 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:22.474 list of standard malloc elements. 
size: 199.283081 MiB 00:05:22.474 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:22.474 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:22.474 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:22.474 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:22.474 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:22.474 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:22.474 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:22.474 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:22.474 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:22.474 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:22.474 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:22.474 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:05:22.474 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:22.474 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:22.474 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:22.474 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b0912c0 
with size: 0.000244 MiB 00:05:22.475 element at address: 0x20001b0913c0 with size: 0.000244 MiB [... identical 0.000244 MiB elements at 0x20001b0914c0 through 0x20001b0953c0, one per line in the original console output ...] 00:05:22.475 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:22.475 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:22.475 element at address: 0x20002846ad00 with size: 0.000244 MiB [... identical 0.000244 MiB elements at 0x20002846af80 through 0x20002846fd80, one per line in the original console output ...] 00:05:22.476 element at address: 0x20002846fe80
with size: 0.000244 MiB 00:05:22.476 list of memzone associated elements. size: 602.264404 MiB 00:05:22.476 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:22.476 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:22.476 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:22.476 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:22.476 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:22.476 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63027_0 00:05:22.476 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:22.476 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63027_0 00:05:22.476 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:22.476 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63027_0 00:05:22.476 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:22.476 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:22.476 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:22.476 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:22.476 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:22.476 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63027 00:05:22.476 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:22.476 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63027 00:05:22.476 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:22.476 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63027 00:05:22.476 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:22.476 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:22.476 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:22.476 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:22.476 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:22.476 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:22.476 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:22.476 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:22.476 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:22.476 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63027 00:05:22.476 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:22.476 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63027 00:05:22.476 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:22.476 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63027 00:05:22.476 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:22.476 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63027 00:05:22.476 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:22.476 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63027 00:05:22.476 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:22.476 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:22.476 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:22.476 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:22.476 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:22.476 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:22.476 element at address: 
0x200003adf740 with size: 0.125549 MiB 00:05:22.476 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63027 00:05:22.476 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:22.476 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:22.476 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:22.476 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:22.476 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:22.476 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63027 00:05:22.476 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:22.476 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:22.476 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:22.476 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63027 00:05:22.476 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:22.476 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63027 00:05:22.476 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:22.476 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:22.476 13:01:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:22.476 13:01:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63027 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 63027 ']' 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 63027 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63027 00:05:22.476 killing process with pid 63027 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63027' 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 63027 00:05:22.476 13:01:14 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 63027 00:05:24.377 ************************************ 00:05:24.377 END TEST dpdk_mem_utility 00:05:24.377 ************************************ 00:05:24.377 00:05:24.377 real 0m3.432s 00:05:24.377 user 0m3.617s 00:05:24.377 sys 0m0.413s 00:05:24.377 13:01:16 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.377 13:01:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.377 13:01:16 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:24.377 13:01:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.377 13:01:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.377 13:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.377 ************************************ 00:05:24.377 START TEST event 00:05:24.377 ************************************ 00:05:24.377 13:01:16 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:24.637 * Looking for test storage... 
00:05:24.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:24.637 13:01:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:24.637 13:01:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:24.637 13:01:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.638 13:01:16 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:24.638 13:01:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.638 13:01:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.638 ************************************ 00:05:24.638 START TEST event_perf 00:05:24.638 ************************************ 00:05:24.638 13:01:16 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.638 Running I/O for 1 seconds...[2024-07-25 13:01:16.650228] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:24.638 [2024-07-25 13:01:16.650393] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63127 ] 00:05:24.638 [2024-07-25 13:01:16.823263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.896 [2024-07-25 13:01:17.006937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.896 [2024-07-25 13:01:17.007051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.896 [2024-07-25 13:01:17.007189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.896 Running I/O for 1 seconds...[2024-07-25 13:01:17.007199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.273 00:05:26.273 lcore 0: 199486 00:05:26.273 lcore 1: 199485 00:05:26.273 lcore 2: 199487 00:05:26.273 lcore 3: 199486 00:05:26.273 done. 00:05:26.273 00:05:26.273 real 0m1.787s 00:05:26.273 user 0m4.553s 00:05:26.273 sys 0m0.111s 00:05:26.273 13:01:18 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.273 13:01:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.273 ************************************ 00:05:26.273 END TEST event_perf 00:05:26.273 ************************************ 00:05:26.273 13:01:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.273 13:01:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:26.273 13:01:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.273 13:01:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.273 ************************************ 00:05:26.273 START TEST event_reactor 00:05:26.273 ************************************ 00:05:26.273 13:01:18 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.531 [2024-07-25 13:01:18.486322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
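For reference, the event_perf run traced above can be reproduced by hand once the repo is built; a minimal sketch, assuming the same tree at /home/vagrant/spdk_repo/spdk and root privileges for hugepage access:

    cd /home/vagrant/spdk_repo/spdk
    # same invocation as the harness: 4 reactors (core mask 0xF), run for 1 second
    sudo ./test/event/event_perf/event_perf -m 0xF -t 1
    # prints one "lcore N: <event count>" line per core and then "done.", as in the output above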
00:05:26.531 [2024-07-25 13:01:18.486647] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63167 ] 00:05:26.531 [2024-07-25 13:01:18.645208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.789 [2024-07-25 13:01:18.815940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.165 test_start 00:05:28.165 oneshot 00:05:28.165 tick 100 00:05:28.165 tick 100 00:05:28.165 tick 250 00:05:28.165 tick 100 00:05:28.165 tick 100 00:05:28.165 tick 100 00:05:28.165 tick 250 00:05:28.165 tick 500 00:05:28.165 tick 100 00:05:28.165 tick 100 00:05:28.165 tick 250 00:05:28.165 tick 100 00:05:28.165 tick 100 00:05:28.165 test_end 00:05:28.165 ************************************ 00:05:28.165 END TEST event_reactor 00:05:28.165 ************************************ 00:05:28.165 00:05:28.165 real 0m1.741s 00:05:28.165 user 0m1.549s 00:05:28.165 sys 0m0.084s 00:05:28.165 13:01:20 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.165 13:01:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.165 13:01:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.165 13:01:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:28.165 13:01:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.165 13:01:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.165 ************************************ 00:05:28.165 START TEST event_reactor_perf 00:05:28.165 ************************************ 00:05:28.165 13:01:20 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.165 [2024-07-25 13:01:20.284841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
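The reactor smoke test above needs only a single core; a sketch of the standalone invocation, assuming the same built tree:

    # -t 1 limits the run to one second; the "tick <n>" lines above are emitted
    # by the test's pollers each time they fire
    sudo ./test/event/reactor/reactor -t 1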
00:05:28.165 [2024-07-25 13:01:20.285018] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63203 ] 00:05:28.423 [2024-07-25 13:01:20.452356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.682 [2024-07-25 13:01:20.626782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.057 test_start 00:05:30.057 test_end 00:05:30.057 Performance: 292040 events per second 00:05:30.057 00:05:30.057 real 0m1.761s 00:05:30.057 user 0m1.561s 00:05:30.057 sys 0m0.090s 00:05:30.057 13:01:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.057 ************************************ 00:05:30.057 END TEST event_reactor_perf 00:05:30.057 ************************************ 00:05:30.057 13:01:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.057 13:01:22 event -- event/event.sh@49 -- # uname -s 00:05:30.057 13:01:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.057 13:01:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.057 13:01:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.057 13:01:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.057 13:01:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.057 ************************************ 00:05:30.057 START TEST event_scheduler 00:05:30.057 ************************************ 00:05:30.057 13:01:22 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.057 * Looking for test storage... 00:05:30.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:30.057 13:01:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.057 13:01:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63271 00:05:30.057 13:01:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.057 13:01:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.057 13:01:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63271 00:05:30.057 13:01:22 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63271 ']' 00:05:30.057 13:01:22 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.057 13:01:22 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.057 13:01:22 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.057 13:01:22 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.057 13:01:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.057 [2024-07-25 13:01:22.234548] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
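Likewise for the reactor performance check; a sketch, noting that the throughput figure (292040 events per second above) will vary by host:

    # measure how many events a single reactor can dispatch in one second
    sudo ./test/event/reactor_perf/reactor_perf -t 1
    # ends with a "Performance: <N> events per second" summary line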
00:05:30.057 [2024-07-25 13:01:22.234710] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63271 ] 00:05:30.315 [2024-07-25 13:01:22.403828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.573 [2024-07-25 13:01:22.622302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.574 [2024-07-25 13:01:22.622423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.574 [2024-07-25 13:01:22.622522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.574 [2024-07-25 13:01:22.622545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.141 13:01:23 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.141 13:01:23 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:31.141 13:01:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:31.141 13:01:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.141 13:01:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.141 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.141 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.141 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.141 POWER: Cannot set governor of lcore 0 to performance 00:05:31.141 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.141 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.141 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.141 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.141 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:31.141 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:31.141 POWER: Unable to set Power Management Environment for lcore 0 00:05:31.141 [2024-07-25 13:01:23.192927] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:31.141 [2024-07-25 13:01:23.192949] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:31.141 [2024-07-25 13:01:23.192965] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:31.141 [2024-07-25 13:01:23.192988] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:31.141 [2024-07-25 13:01:23.193003] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:31.141 [2024-07-25 13:01:23.193014] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:31.141 13:01:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.141 13:01:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:31.141 13:01:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.141 13:01:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 [2024-07-25 13:01:23.458825] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
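The scheduler app above was started with --wait-for-rpc, so the two RPCs in the trace can also be issued by hand; a minimal sketch, assuming the default /var/tmp/spdk.sock socket used by this test:

    # select the dynamic scheduler, then let subsystem initialization proceed
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # the POWER/governor errors above just reflect the missing cpufreq sysfs files on this VM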
00:05:31.399 13:01:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:31.399 13:01:23 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.399 13:01:23 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 ************************************ 00:05:31.399 START TEST scheduler_create_thread 00:05:31.399 ************************************ 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 2 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 3 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 4 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 5 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 6 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 7 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 8 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 9 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 10 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.399 13:01:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.773 13:01:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.773 13:01:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.773 13:01:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.773 13:01:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.773 13:01:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.710 13:01:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.710 00:05:33.710 real 0m2.139s 00:05:33.710 user 0m0.016s 00:05:33.710 sys 0m0.008s 00:05:33.710 13:01:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.710 ************************************ 00:05:33.710 END TEST scheduler_create_thread 00:05:33.710 ************************************ 00:05:33.710 13:01:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.710 13:01:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.710 13:01:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63271 00:05:33.710 13:01:25 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63271 ']' 00:05:33.710 13:01:25 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63271 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63271 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:33.711 killing process with pid 63271 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63271' 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63271 00:05:33.711 13:01:25 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63271 00:05:33.970 [2024-07-25 13:01:26.089650] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
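The thread create/activate/delete calls above go through the test's own rpc.py plugin; a sketch of the same sequence, assuming PYTHONPATH includes test/event/scheduler so scheduler_plugin can be loaded, and noting that the thread ids (11 and 12 above) depend on the run:

    # create an active thread pinned to core 0, drop an existing thread to 50% activity,
    # then delete a previously created thread by id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12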
00:05:35.345 00:05:35.345 real 0m5.134s 00:05:35.345 user 0m8.642s 00:05:35.345 sys 0m0.424s 00:05:35.345 13:01:27 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.345 ************************************ 00:05:35.345 END TEST event_scheduler 00:05:35.345 ************************************ 00:05:35.345 13:01:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.345 13:01:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:35.345 13:01:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:35.345 13:01:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.345 13:01:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.345 13:01:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.345 ************************************ 00:05:35.345 START TEST app_repeat 00:05:35.345 ************************************ 00:05:35.345 13:01:27 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63377 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:35.345 Process app_repeat pid: 63377 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63377' 00:05:35.345 spdk_app_start Round 0 00:05:35.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:35.345 13:01:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63377 /var/tmp/spdk-nbd.sock 00:05:35.345 13:01:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63377 ']' 00:05:35.345 13:01:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.345 13:01:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.345 13:01:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.345 13:01:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.345 13:01:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.345 [2024-07-25 13:01:27.315682] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
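The app_repeat rounds that follow all do the same bdev/nbd plumbing over the dedicated /var/tmp/spdk-nbd.sock socket; a sketch of one round's setup, assuming the nbd kernel module is available (the harness checks this with modprobe):

    # two 64 MiB malloc bdevs with 4096-byte blocks, exported as /dev/nbd0 and /dev/nbd1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1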
00:05:35.345 [2024-07-25 13:01:27.316084] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63377 ] 00:05:35.345 [2024-07-25 13:01:27.488608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.603 [2024-07-25 13:01:27.669152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.603 [2024-07-25 13:01:27.669170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.169 13:01:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.169 13:01:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:36.169 13:01:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.451 Malloc0 00:05:36.451 13:01:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.709 Malloc1 00:05:36.709 13:01:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.709 13:01:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.967 /dev/nbd0 00:05:36.967 13:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.967 13:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:36.967 13:01:29 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.967 1+0 records in 00:05:36.967 1+0 records out 00:05:36.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531038 s, 7.7 MB/s 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.967 13:01:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:36.967 13:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.967 13:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.967 13:01:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.224 /dev/nbd1 00:05:37.481 13:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.481 13:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.481 1+0 records in 00:05:37.481 1+0 records out 00:05:37.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052528 s, 7.8 MB/s 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.481 13:01:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.481 13:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.481 13:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.481 13:01:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.481 13:01:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.481 
13:01:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.738 13:01:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.738 { 00:05:37.739 "nbd_device": "/dev/nbd0", 00:05:37.739 "bdev_name": "Malloc0" 00:05:37.739 }, 00:05:37.739 { 00:05:37.739 "nbd_device": "/dev/nbd1", 00:05:37.739 "bdev_name": "Malloc1" 00:05:37.739 } 00:05:37.739 ]' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.739 { 00:05:37.739 "nbd_device": "/dev/nbd0", 00:05:37.739 "bdev_name": "Malloc0" 00:05:37.739 }, 00:05:37.739 { 00:05:37.739 "nbd_device": "/dev/nbd1", 00:05:37.739 "bdev_name": "Malloc1" 00:05:37.739 } 00:05:37.739 ]' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.739 /dev/nbd1' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.739 /dev/nbd1' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.739 256+0 records in 00:05:37.739 256+0 records out 00:05:37.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00828386 s, 127 MB/s 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.739 256+0 records in 00:05:37.739 256+0 records out 00:05:37.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03218 s, 32.6 MB/s 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.739 256+0 records in 00:05:37.739 256+0 records out 00:05:37.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0353326 s, 29.7 MB/s 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.739 13:01:29 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.739 13:01:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.997 13:01:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.255 13:01:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.255 13:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.820 13:01:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.820 13:01:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.078 13:01:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.449 [2024-07-25 13:01:32.332794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.449 [2024-07-25 13:01:32.510233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.449 [2024-07-25 13:01:32.510240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.707 [2024-07-25 13:01:32.677025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.707 [2024-07-25 13:01:32.677093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.078 spdk_app_start Round 1 00:05:42.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.078 13:01:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.078 13:01:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:42.078 13:01:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63377 /var/tmp/spdk-nbd.sock 00:05:42.078 13:01:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63377 ']' 00:05:42.078 13:01:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.078 13:01:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.078 13:01:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
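Each round's data check in the trace follows the same write-then-compare pattern before the devices are torn down; a sketch against one device, using the scratch-file path from the log:

    # fill a scratch file with random data, copy it onto the nbd device, then byte-compare 1 MiB
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
    dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0   # silent on success
    rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0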
00:05:42.078 13:01:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.078 13:01:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.335 13:01:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.335 13:01:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.335 13:01:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.900 Malloc0 00:05:42.900 13:01:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.159 Malloc1 00:05:43.159 13:01:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.159 13:01:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.417 /dev/nbd0 00:05:43.417 13:01:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.417 13:01:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.417 13:01:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.417 1+0 records in 00:05:43.417 1+0 records out 
00:05:43.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531106 s, 7.7 MB/s 00:05:43.418 13:01:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.418 13:01:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.418 13:01:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.418 13:01:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.418 13:01:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.418 13:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.418 13:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.418 13:01:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.676 /dev/nbd1 00:05:43.676 13:01:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.676 13:01:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.676 1+0 records in 00:05:43.676 1+0 records out 00:05:43.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544217 s, 7.5 MB/s 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.676 13:01:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.676 13:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.676 13:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.676 13:01:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.676 13:01:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.676 13:01:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.933 13:01:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.934 { 00:05:43.934 "nbd_device": "/dev/nbd0", 00:05:43.934 "bdev_name": "Malloc0" 00:05:43.934 }, 00:05:43.934 { 00:05:43.934 "nbd_device": "/dev/nbd1", 00:05:43.934 "bdev_name": "Malloc1" 00:05:43.934 } 
00:05:43.934 ]' 00:05:43.934 13:01:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.934 { 00:05:43.934 "nbd_device": "/dev/nbd0", 00:05:43.934 "bdev_name": "Malloc0" 00:05:43.934 }, 00:05:43.934 { 00:05:43.934 "nbd_device": "/dev/nbd1", 00:05:43.934 "bdev_name": "Malloc1" 00:05:43.934 } 00:05:43.934 ]' 00:05:43.934 13:01:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.934 /dev/nbd1' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.934 /dev/nbd1' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.934 256+0 records in 00:05:43.934 256+0 records out 00:05:43.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0088877 s, 118 MB/s 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.934 256+0 records in 00:05:43.934 256+0 records out 00:05:43.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278 s, 37.7 MB/s 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.934 256+0 records in 00:05:43.934 256+0 records out 00:05:43.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0353675 s, 29.6 MB/s 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.934 13:01:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.192 13:01:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.758 13:01:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo 
'[]' 00:05:45.016 13:01:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.016 13:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.016 13:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.016 13:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.016 13:01:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.016 13:01:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.016 13:01:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.016 13:01:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.016 13:01:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.016 13:01:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.274 13:01:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.648 [2024-07-25 13:01:38.624840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.648 [2024-07-25 13:01:38.814815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.648 [2024-07-25 13:01:38.814821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.907 [2024-07-25 13:01:38.982971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.907 [2024-07-25 13:01:38.983075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.282 spdk_app_start Round 2 00:05:48.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.282 13:01:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.282 13:01:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.282 13:01:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63377 /var/tmp/spdk-nbd.sock 00:05:48.282 13:01:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63377 ']' 00:05:48.282 13:01:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.282 13:01:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.282 13:01:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
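The write/verify pass traced in the round above follows the nbd_dd_data_verify pattern from bdev/nbd_common.sh. A minimal sketch of that flow, condensed from the dd and cmp commands visible in the trace (the function name and the /tmp paths are illustrative, not the script's own), is:

    # Sketch of the nbd_dd_data_verify flow seen in the trace above.
    nbd_dd_data_verify_sketch() {
        local nbd_list=(/dev/nbd0 /dev/nbd1)
        local operation=$1                    # "write" or "verify"
        local tmp_file=/tmp/nbdrandtest       # the log uses test/event/nbdrandtest
        if [ "$operation" = write ]; then
            # seed 1 MiB of random data, then copy it onto every NBD device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # read the same 1 MiB back from each device and compare byte-for-byte
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }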
00:05:48.282 13:01:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.282 13:01:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.849 13:01:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.849 13:01:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:48.849 13:01:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.107 Malloc0 00:05:49.107 13:01:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.364 Malloc1 00:05:49.364 13:01:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.364 13:01:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.365 13:01:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.622 /dev/nbd0 00:05:49.622 13:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.622 13:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.622 1+0 records in 00:05:49.622 1+0 records out 
00:05:49.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190382 s, 21.5 MB/s 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.622 13:01:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.622 13:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.622 13:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.622 13:01:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.881 /dev/nbd1 00:05:49.881 13:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.881 13:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.881 1+0 records in 00:05:49.881 1+0 records out 00:05:49.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539022 s, 7.6 MB/s 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.881 13:01:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.881 13:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.881 13:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.881 13:01:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.881 13:01:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.881 13:01:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.139 { 00:05:50.139 "nbd_device": "/dev/nbd0", 00:05:50.139 "bdev_name": "Malloc0" 00:05:50.139 }, 00:05:50.139 { 00:05:50.139 "nbd_device": "/dev/nbd1", 00:05:50.139 "bdev_name": "Malloc1" 00:05:50.139 } 
00:05:50.139 ]' 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.139 { 00:05:50.139 "nbd_device": "/dev/nbd0", 00:05:50.139 "bdev_name": "Malloc0" 00:05:50.139 }, 00:05:50.139 { 00:05:50.139 "nbd_device": "/dev/nbd1", 00:05:50.139 "bdev_name": "Malloc1" 00:05:50.139 } 00:05:50.139 ]' 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.139 /dev/nbd1' 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.139 /dev/nbd1' 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.139 256+0 records in 00:05:50.139 256+0 records out 00:05:50.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00710699 s, 148 MB/s 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.139 13:01:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.397 256+0 records in 00:05:50.397 256+0 records out 00:05:50.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290506 s, 36.1 MB/s 00:05:50.397 13:01:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.397 13:01:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.397 256+0 records in 00:05:50.397 256+0 records out 00:05:50.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309989 s, 33.8 MB/s 00:05:50.397 13:01:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.397 13:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.398 13:01:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.398 13:01:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.656 13:01:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.914 13:01:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.172 13:01:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.172 13:01:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.172 13:01:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:51.172 13:01:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.172 13:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.172 13:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.173 13:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.173 13:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.173 13:01:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.173 13:01:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.173 13:01:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.173 13:01:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.173 13:01:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.737 13:01:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.140 [2024-07-25 13:01:44.869888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.140 [2024-07-25 13:01:45.053717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.140 [2024-07-25 13:01:45.053729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.140 [2024-07-25 13:01:45.223177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.140 [2024-07-25 13:01:45.223300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.515 13:01:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63377 /var/tmp/spdk-nbd.sock 00:05:54.515 13:01:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63377 ']' 00:05:54.515 13:01:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.515 13:01:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.515 13:01:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
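Each nbd_start_disk call in the rounds above is followed by the waitfornbd helper from autotest_common.sh. A rough reconstruction of what the traced lines do (retry count and the 4 KiB direct read are taken from the log; the sleep interval and /tmp path are assumptions for the sketch) is:

    # Sketch of waitfornbd as traced above: wait for the device node to show up,
    # then confirm a direct-I/O read of one 4 KiB block actually returns data.
    waitfornbd_sketch() {
        local nbd_name=$1                     # e.g. nbd0
        local i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                         # assumed retry delay
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                      # non-empty read => device is usable
    }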
00:05:54.515 13:01:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.515 13:01:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.772 13:01:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.772 13:01:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:54.772 13:01:46 event.app_repeat -- event/event.sh@39 -- # killprocess 63377 00:05:54.772 13:01:46 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63377 ']' 00:05:54.772 13:01:46 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63377 00:05:54.772 13:01:46 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:54.773 13:01:46 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.773 13:01:46 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63377 00:05:54.773 killing process with pid 63377 00:05:54.773 13:01:46 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.773 13:01:46 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.773 13:01:46 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63377' 00:05:54.773 13:01:46 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63377 00:05:54.773 13:01:46 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63377 00:05:56.146 spdk_app_start is called in Round 0. 00:05:56.146 Shutdown signal received, stop current app iteration 00:05:56.146 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:56.146 spdk_app_start is called in Round 1. 00:05:56.146 Shutdown signal received, stop current app iteration 00:05:56.146 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:56.146 spdk_app_start is called in Round 2. 00:05:56.146 Shutdown signal received, stop current app iteration 00:05:56.146 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:56.146 spdk_app_start is called in Round 3. 00:05:56.146 Shutdown signal received, stop current app iteration 00:05:56.146 13:01:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.146 13:01:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.146 00:05:56.146 real 0m20.751s 00:05:56.146 user 0m44.766s 00:05:56.146 sys 0m2.697s 00:05:56.146 ************************************ 00:05:56.146 END TEST app_repeat 00:05:56.146 ************************************ 00:05:56.146 13:01:48 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.146 13:01:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.146 13:01:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.146 13:01:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.146 13:01:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.146 13:01:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.146 13:01:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.146 ************************************ 00:05:56.146 START TEST cpu_locks 00:05:56.146 ************************************ 00:05:56.146 13:01:48 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.146 * Looking for test storage... 
00:05:56.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.146 13:01:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.146 13:01:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.146 13:01:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.146 13:01:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.146 13:01:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.146 13:01:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.146 13:01:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.146 ************************************ 00:05:56.146 START TEST default_locks 00:05:56.146 ************************************ 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63837 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63837 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63837 ']' 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.146 13:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.146 [2024-07-25 13:01:48.262134] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:56.146 [2024-07-25 13:01:48.262309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63837 ] 00:05:56.404 [2024-07-25 13:01:48.427668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.663 [2024-07-25 13:01:48.620318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.230 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.230 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:57.230 13:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63837 00:05:57.230 13:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63837 00:05:57.230 13:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.797 13:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63837 00:05:57.797 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 63837 ']' 00:05:57.797 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 63837 00:05:57.797 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:57.797 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.797 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63837 00:05:57.798 killing process with pid 63837 00:05:57.798 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.798 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.798 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63837' 00:05:57.798 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 63837 00:05:57.798 13:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 63837 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63837 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63837 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:00.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
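The default_locks pass above checks for the per-core lock file with lslocks and then tears the target down with killprocess. Condensed from the traced helpers (the uname/Linux, reactor-name, and sudo checks are dropped here, so this is only an outline of the two calls):

    # Outline of the two helpers exercised in the trace above.
    locks_exist() {
        local pid=$1
        # spdk_tgt -m 0x1 is expected to hold a file lock named spdk_cpu_lock*
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    killprocess() {
        local pid=$1
        kill -0 "$pid"                        # fail early if it already exited
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }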
00:06:00.328 ERROR: process (pid: 63837) is no longer running 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 63837 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63837 ']' 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63837) - No such process 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.328 ************************************ 00:06:00.328 END TEST default_locks 00:06:00.328 ************************************ 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.328 00:06:00.328 real 0m3.900s 00:06:00.328 user 0m4.055s 00:06:00.328 sys 0m0.623s 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.328 13:01:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.328 13:01:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:00.328 13:01:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.328 13:01:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.328 13:01:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.328 ************************************ 00:06:00.328 START TEST default_locks_via_rpc 00:06:00.328 ************************************ 00:06:00.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63907 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63907 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63907 ']' 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.328 13:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.328 [2024-07-25 13:01:52.221137] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:00.328 [2024-07-25 13:01:52.221355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63907 ] 00:06:00.328 [2024-07-25 13:01:52.395062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.587 [2024-07-25 13:01:52.582457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 
63907 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63907 00:06:01.152 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63907 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 63907 ']' 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 63907 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63907 00:06:01.716 killing process with pid 63907 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63907' 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 63907 00:06:01.716 13:01:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 63907 00:06:04.272 00:06:04.272 real 0m3.773s 00:06:04.272 user 0m3.920s 00:06:04.272 sys 0m0.556s 00:06:04.272 ************************************ 00:06:04.272 END TEST default_locks_via_rpc 00:06:04.272 ************************************ 00:06:04.272 13:01:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.272 13:01:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.272 13:01:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:04.272 13:01:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.272 13:01:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.272 13:01:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.272 ************************************ 00:06:04.272 START TEST non_locking_app_on_locked_coremask 00:06:04.272 ************************************ 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63976 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63976 /var/tmp/spdk.sock 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63976 ']' 00:06:04.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
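The default_locks_via_rpc pass above toggles the core locks at runtime over RPC rather than at start-up. The traced sequence corresponds roughly to the following (RPC method names as shown in the log; the rpc.py socket is assumed to be the default /var/tmp/spdk.sock, and the pid is the one from this particular run):

    # Sketch of the default_locks_via_rpc scenario: disable the CPU-core locks
    # over RPC, confirm none exist, re-enable them, confirm the lock reappears.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=63907                                 # spdk_tgt pid from this run
    $rpc framework_disable_cpumask_locks
    ! lslocks -p "$pid" | grep -q spdk_cpu_lock   # no lock while disabled
    $rpc framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock     # lock present again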
00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.272 13:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.272 [2024-07-25 13:01:56.039747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:04.272 [2024-07-25 13:01:56.040232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63976 ] 00:06:04.272 [2024-07-25 13:01:56.208686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.272 [2024-07-25 13:01:56.397849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63992 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63992 /var/tmp/spdk2.sock 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63992 ']' 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.209 13:01:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.209 [2024-07-25 13:01:57.222011] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:05.209 [2024-07-25 13:01:57.222434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63992 ] 00:06:05.467 [2024-07-25 13:01:57.402462] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.467 [2024-07-25 13:01:57.402524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.726 [2024-07-25 13:01:57.786562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.261 13:01:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.261 13:01:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.261 13:01:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63976 00:06:08.261 13:01:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63976 00:06:08.261 13:01:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63976 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63976 ']' 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63976 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63976 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.828 killing process with pid 63976 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63976' 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63976 00:06:08.828 13:02:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63976 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63992 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63992 ']' 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63992 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63992 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.018 killing process with pid 63992 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63992' 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63992 00:06:13.018 13:02:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63992 00:06:14.922 ************************************ 00:06:14.922 END TEST non_locking_app_on_locked_coremask 00:06:14.922 ************************************ 00:06:14.922 00:06:14.922 real 0m11.126s 00:06:14.922 user 0m11.793s 00:06:14.922 sys 0m1.312s 00:06:14.922 13:02:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.922 13:02:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.922 13:02:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.922 13:02:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.922 13:02:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.922 13:02:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.922 ************************************ 00:06:14.922 START TEST locking_app_on_unlocked_coremask 00:06:14.922 ************************************ 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64140 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64140 /var/tmp/spdk.sock 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64140 ']' 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.922 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.923 13:02:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.180 [2024-07-25 13:02:07.224215] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:15.180 [2024-07-25 13:02:07.224410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64140 ] 00:06:15.440 [2024-07-25 13:02:07.397447] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:15.440 [2024-07-25 13:02:07.397519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.440 [2024-07-25 13:02:07.605459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64156 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64156 /var/tmp/spdk2.sock 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64156 ']' 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.374 13:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.374 [2024-07-25 13:02:08.429179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:16.374 [2024-07-25 13:02:08.429553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64156 ] 00:06:16.633 [2024-07-25 13:02:08.605078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.891 [2024-07-25 13:02:09.012091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.424 13:02:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.424 13:02:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.424 13:02:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64156 00:06:19.424 13:02:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64156 00:06:19.424 13:02:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64140 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64140 ']' 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64140 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64140 00:06:20.008 killing process with pid 64140 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64140' 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64140 00:06:20.008 13:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64140 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64156 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64156 ']' 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64156 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64156 00:06:25.285 killing process with pid 64156 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.285 13:02:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64156' 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64156 00:06:25.285 13:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64156 00:06:26.657 ************************************ 00:06:26.657 END TEST locking_app_on_unlocked_coremask 00:06:26.657 ************************************ 00:06:26.657 00:06:26.657 real 0m11.634s 00:06:26.657 user 0m12.357s 00:06:26.657 sys 0m1.282s 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.657 13:02:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.657 13:02:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.657 13:02:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.657 13:02:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.657 ************************************ 00:06:26.657 START TEST locking_app_on_locked_coremask 00:06:26.657 ************************************ 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64310 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64310 /var/tmp/spdk.sock 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64310 ']' 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.657 13:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.915 [2024-07-25 13:02:18.908476] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:26.915 [2024-07-25 13:02:18.908646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64310 ] 00:06:26.915 [2024-07-25 13:02:19.080616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.173 [2024-07-25 13:02:19.320710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64327 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64327 /var/tmp/spdk2.sock 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64327 /var/tmp/spdk2.sock 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64327 /var/tmp/spdk2.sock 00:06:28.116 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64327 ']' 00:06:28.117 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.117 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.117 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.117 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.117 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.117 [2024-07-25 13:02:20.193485] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:28.117 [2024-07-25 13:02:20.193654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64327 ] 00:06:28.408 [2024-07-25 13:02:20.376136] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64310 has claimed it. 00:06:28.408 [2024-07-25 13:02:20.376309] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.683 ERROR: process (pid: 64327) is no longer running 00:06:28.683 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64327) - No such process 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64310 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64310 00:06:28.683 13:02:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64310 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64310 ']' 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64310 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64310 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.250 killing process with pid 64310 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64310' 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64310 00:06:29.250 13:02:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64310 00:06:31.778 00:06:31.778 real 0m4.772s 00:06:31.778 user 0m5.262s 00:06:31.778 sys 0m0.743s 00:06:31.778 ************************************ 00:06:31.778 END TEST locking_app_on_locked_coremask 00:06:31.778 ************************************ 00:06:31.778 13:02:23 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.778 13:02:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.778 13:02:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.778 13:02:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.778 13:02:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.778 13:02:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.778 ************************************ 00:06:31.778 START TEST locking_overlapped_coremask 00:06:31.778 ************************************ 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64391 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64391 /var/tmp/spdk.sock 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64391 ']' 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.778 13:02:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.778 [2024-07-25 13:02:23.727486] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
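The failure mode exercised by locking_app_on_locked_coremask above can be reproduced by hand with the same binaries and paths used in this run; the second instance overlaps core 0 and is expected to exit instead of serving RPCs (a sketch, not part of the test script):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second instance on the same mask should fail with
    # "Cannot create lock on core 0" / "Unable to acquire lock on assigned core mask - exiting."
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock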
00:06:31.778 [2024-07-25 13:02:23.727633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64391 ] 00:06:31.778 [2024-07-25 13:02:23.900157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.036 [2024-07-25 13:02:24.156668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.036 [2024-07-25 13:02:24.156762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.036 [2024-07-25 13:02:24.156764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64419 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64419 /var/tmp/spdk2.sock 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64419 /var/tmp/spdk2.sock 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64419 /var/tmp/spdk2.sock 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64419 ']' 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.970 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.971 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.971 13:02:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.971 [2024-07-25 13:02:25.038046] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:32.971 [2024-07-25 13:02:25.038238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64419 ] 00:06:33.228 [2024-07-25 13:02:25.224951] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64391 has claimed it. 00:06:33.229 [2024-07-25 13:02:25.225043] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:33.795 ERROR: process (pid: 64419) is no longer running 00:06:33.795 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64419) - No such process 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64391 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64391 ']' 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64391 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64391 00:06:33.795 killing process with pid 64391 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64391' 00:06:33.795 13:02:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64391 00:06:33.795 13:02:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64391 00:06:36.348 ************************************ 00:06:36.348 END TEST locking_overlapped_coremask 00:06:36.348 ************************************ 00:06:36.348 00:06:36.348 real 0m4.316s 00:06:36.348 user 0m11.297s 00:06:36.348 sys 0m0.588s 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.348 13:02:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.348 13:02:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.348 13:02:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.348 13:02:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.348 ************************************ 00:06:36.348 START TEST locking_overlapped_coremask_via_rpc 00:06:36.348 ************************************ 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64479 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64479 /var/tmp/spdk.sock 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64479 ']' 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.348 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.349 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.349 13:02:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.349 [2024-07-25 13:02:28.096034] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:36.349 [2024-07-25 13:02:28.096202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64479 ] 00:06:36.349 [2024-07-25 13:02:28.262242] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
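check_remaining_locks, run at the end of the overlapped-coremask tests, compares the lock files actually left in /var/tmp against the set a 0x7 cpumask should have produced; a sketch with the body assumed from the traced array assignments and comparison:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }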
00:06:36.349 [2024-07-25 13:02:28.262306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.349 [2024-07-25 13:02:28.472310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.349 [2024-07-25 13:02:28.472417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.349 [2024-07-25 13:02:28.472428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64502 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64502 /var/tmp/spdk2.sock 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64502 ']' 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.282 13:02:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.282 [2024-07-25 13:02:29.317845] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:37.282 [2024-07-25 13:02:29.317996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64502 ] 00:06:37.541 [2024-07-25 13:02:29.493326] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.541 [2024-07-25 13:02:29.493409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.799 [2024-07-25 13:02:29.885557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.799 [2024-07-25 13:02:29.889716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.799 [2024-07-25 13:02:29.889735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.174 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.175 [2024-07-25 13:02:31.315404] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64479 has claimed it. 
00:06:39.175 request: 00:06:39.175 { 00:06:39.175 "method": "framework_enable_cpumask_locks", 00:06:39.175 "req_id": 1 00:06:39.175 } 00:06:39.175 Got JSON-RPC error response 00:06:39.175 response: 00:06:39.175 { 00:06:39.175 "code": -32603, 00:06:39.175 "message": "Failed to claim CPU core: 2" 00:06:39.175 } 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64479 /var/tmp/spdk.sock 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64479 ']' 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.175 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64502 /var/tmp/spdk2.sock 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64502 ']' 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
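The claim failure above surfaces through the normal JSON-RPC path, so it can also be triggered manually with the rpc.py script and sockets used in this run; the second call returns the -32603 "Failed to claim CPU core: 2" error shown above (commands illustrative, not part of the trace):

    # primary target was started with --disable-cpumask-locks, so claim its cores now
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # the secondary target overlaps core 2, so the same RPC fails there
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks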
00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.433 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.011 ************************************ 00:06:40.011 END TEST locking_overlapped_coremask_via_rpc 00:06:40.011 ************************************ 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.011 00:06:40.011 real 0m3.962s 00:06:40.011 user 0m1.569s 00:06:40.011 sys 0m0.186s 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.011 13:02:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.011 13:02:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.011 13:02:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64479 ]] 00:06:40.011 13:02:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64479 00:06:40.011 13:02:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64479 ']' 00:06:40.011 13:02:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64479 00:06:40.011 13:02:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:40.011 13:02:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.011 13:02:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64479 00:06:40.011 13:02:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.011 killing process with pid 64479 00:06:40.011 13:02:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.011 13:02:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64479' 00:06:40.012 13:02:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64479 00:06:40.012 13:02:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64479 00:06:42.541 13:02:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64502 ]] 00:06:42.541 13:02:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64502 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64502 ']' 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64502 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.541 
13:02:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64502 00:06:42.541 killing process with pid 64502 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64502' 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64502 00:06:42.541 13:02:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64502 00:06:44.462 13:02:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.462 Process with pid 64479 is not found 00:06:44.462 13:02:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:44.462 13:02:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64479 ]] 00:06:44.462 13:02:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64479 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64479 ']' 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64479 00:06:44.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64479) - No such process 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64479 is not found' 00:06:44.462 13:02:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64502 ]] 00:06:44.462 13:02:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64502 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64502 ']' 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64502 00:06:44.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64502) - No such process 00:06:44.462 Process with pid 64502 is not found 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64502 is not found' 00:06:44.462 13:02:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.462 00:06:44.462 real 0m48.445s 00:06:44.462 user 1m21.591s 00:06:44.462 sys 0m6.193s 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.462 ************************************ 00:06:44.462 13:02:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.462 END TEST cpu_locks 00:06:44.462 ************************************ 00:06:44.462 00:06:44.462 real 1m20.032s 00:06:44.462 user 2m22.795s 00:06:44.462 sys 0m9.840s 00:06:44.462 13:02:36 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.462 ************************************ 00:06:44.462 END TEST event 00:06:44.462 ************************************ 00:06:44.462 13:02:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.462 13:02:36 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.462 13:02:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.462 13:02:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.462 13:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:44.462 ************************************ 00:06:44.462 START TEST thread 00:06:44.462 ************************************ 00:06:44.462 13:02:36 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.739 * Looking for test storage... 
00:06:44.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:44.739 13:02:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.739 13:02:36 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:44.739 13:02:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.739 13:02:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.739 ************************************ 00:06:44.739 START TEST thread_poller_perf 00:06:44.739 ************************************ 00:06:44.739 13:02:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.739 [2024-07-25 13:02:36.730890] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:44.739 [2024-07-25 13:02:36.731062] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64676 ] 00:06:44.739 [2024-07-25 13:02:36.904906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.000 [2024-07-25 13:02:37.146055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.000 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:46.900 ====================================== 00:06:46.900 busy:2216202468 (cyc) 00:06:46.900 total_run_count: 262000 00:06:46.900 tsc_hz: 2200000000 (cyc) 00:06:46.900 ====================================== 00:06:46.900 poller_cost: 8458 (cyc), 3844 (nsec) 00:06:46.900 00:06:46.900 real 0m1.891s 00:06:46.900 user 0m1.675s 00:06:46.900 sys 0m0.105s 00:06:46.900 13:02:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.900 13:02:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.900 ************************************ 00:06:46.900 END TEST thread_poller_perf 00:06:46.900 ************************************ 00:06:46.900 13:02:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.900 13:02:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:46.900 13:02:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.900 13:02:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.900 ************************************ 00:06:46.900 START TEST thread_poller_perf 00:06:46.900 ************************************ 00:06:46.900 13:02:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.900 [2024-07-25 13:02:38.675495] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:46.900 [2024-07-25 13:02:38.675656] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64713 ] 00:06:46.900 [2024-07-25 13:02:38.855576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.158 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:47.158 [2024-07-25 13:02:39.096723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.532 ====================================== 00:06:48.532 busy:2205116618 (cyc) 00:06:48.532 total_run_count: 3417000 00:06:48.532 tsc_hz: 2200000000 (cyc) 00:06:48.532 ====================================== 00:06:48.532 poller_cost: 645 (cyc), 293 (nsec) 00:06:48.532 00:06:48.532 real 0m1.861s 00:06:48.532 user 0m1.631s 00:06:48.532 sys 0m0.115s 00:06:48.532 13:02:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.532 13:02:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.532 ************************************ 00:06:48.532 END TEST thread_poller_perf 00:06:48.532 ************************************ 00:06:48.532 13:02:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.532 00:06:48.532 real 0m3.929s 00:06:48.532 user 0m3.367s 00:06:48.532 sys 0m0.330s 00:06:48.532 13:02:40 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.532 13:02:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.532 ************************************ 00:06:48.532 END TEST thread 00:06:48.532 ************************************ 00:06:48.532 13:02:40 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:48.532 13:02:40 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:48.532 13:02:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.532 13:02:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.532 13:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.532 ************************************ 00:06:48.532 START TEST app_cmdline 00:06:48.532 ************************************ 00:06:48.532 13:02:40 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:48.532 * Looking for test storage... 00:06:48.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:48.532 13:02:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.532 13:02:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64794 00:06:48.532 13:02:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64794 00:06:48.532 13:02:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.532 13:02:40 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 64794 ']' 00:06:48.532 13:02:40 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.532 13:02:40 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.532 13:02:40 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.532 13:02:40 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.532 13:02:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.790 [2024-07-25 13:02:40.769474] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
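The poller_cost figures reported by the two poller_perf runs above are the busy cycle count divided by total_run_count, converted to nanoseconds with tsc_hz; checking the arithmetic against the reported numbers:

    1 us period:  2216202468 cyc / 262000 runs  ~= 8458 cyc;  8458 cyc / 2.2 GHz ~= 3844 ns
    0 us period:  2205116618 cyc / 3417000 runs ~=  645 cyc;   645 cyc / 2.2 GHz ~=  293 ns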
00:06:48.790 [2024-07-25 13:02:40.769656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64794 ] 00:06:48.790 [2024-07-25 13:02:40.946773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.048 [2024-07-25 13:02:41.177500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.981 13:02:41 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.981 13:02:41 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:49.981 13:02:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:49.981 { 00:06:49.981 "version": "SPDK v24.09-pre git sha1 704257090", 00:06:49.981 "fields": { 00:06:49.981 "major": 24, 00:06:49.981 "minor": 9, 00:06:49.981 "patch": 0, 00:06:49.981 "suffix": "-pre", 00:06:49.981 "commit": "704257090" 00:06:49.981 } 00:06:49.981 } 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:50.239 13:02:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:50.239 13:02:42 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.497 request: 00:06:50.497 { 00:06:50.497 "method": "env_dpdk_get_mem_stats", 00:06:50.497 "req_id": 1 00:06:50.497 } 00:06:50.497 Got JSON-RPC error response 00:06:50.497 response: 00:06:50.497 { 00:06:50.497 "code": -32601, 00:06:50.497 "message": "Method not found" 00:06:50.497 } 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.497 13:02:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64794 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 64794 ']' 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 64794 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64794 00:06:50.497 13:02:42 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.498 killing process with pid 64794 00:06:50.498 13:02:42 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.498 13:02:42 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64794' 00:06:50.498 13:02:42 app_cmdline -- common/autotest_common.sh@969 -- # kill 64794 00:06:50.498 13:02:42 app_cmdline -- common/autotest_common.sh@974 -- # wait 64794 00:06:53.024 00:06:53.024 real 0m4.058s 00:06:53.024 user 0m4.591s 00:06:53.024 sys 0m0.512s 00:06:53.024 13:02:44 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.024 ************************************ 00:06:53.024 13:02:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 END TEST app_cmdline 00:06:53.024 ************************************ 00:06:53.024 13:02:44 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:53.024 13:02:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.024 13:02:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.024 13:02:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 ************************************ 00:06:53.024 START TEST version 00:06:53.024 ************************************ 00:06:53.024 13:02:44 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:53.024 * Looking for test storage... 
00:06:53.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.024 13:02:44 version -- app/version.sh@17 -- # get_header_version major 00:06:53.024 13:02:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.024 13:02:44 version -- app/version.sh@17 -- # major=24 00:06:53.024 13:02:44 version -- app/version.sh@18 -- # get_header_version minor 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.024 13:02:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.024 13:02:44 version -- app/version.sh@18 -- # minor=9 00:06:53.024 13:02:44 version -- app/version.sh@19 -- # get_header_version patch 00:06:53.024 13:02:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.024 13:02:44 version -- app/version.sh@19 -- # patch=0 00:06:53.024 13:02:44 version -- app/version.sh@20 -- # get_header_version suffix 00:06:53.024 13:02:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.024 13:02:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.024 13:02:44 version -- app/version.sh@20 -- # suffix=-pre 00:06:53.024 13:02:44 version -- app/version.sh@22 -- # version=24.9 00:06:53.024 13:02:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:53.024 13:02:44 version -- app/version.sh@28 -- # version=24.9rc0 00:06:53.024 13:02:44 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:53.024 13:02:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:53.024 13:02:44 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:53.024 13:02:44 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:53.024 00:06:53.024 real 0m0.150s 00:06:53.024 user 0m0.087s 00:06:53.024 sys 0m0.092s 00:06:53.024 13:02:44 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.024 13:02:44 version -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 ************************************ 00:06:53.024 END TEST version 00:06:53.024 ************************************ 00:06:53.024 13:02:44 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:53.024 13:02:44 -- spdk/autotest.sh@202 -- # uname -s 00:06:53.024 13:02:44 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:53.024 13:02:44 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:53.024 13:02:44 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:53.024 13:02:44 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:06:53.024 13:02:44 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:53.024 13:02:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:06:53.024 13:02:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.024 13:02:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 ************************************ 00:06:53.024 START TEST blockdev_nvme 00:06:53.024 ************************************ 00:06:53.024 13:02:44 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:53.024 * Looking for test storage... 00:06:53.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:53.024 13:02:44 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:53.024 13:02:44 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64961 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:53.025 13:02:44 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 64961 00:06:53.025 13:02:44 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 64961 ']' 00:06:53.025 13:02:44 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.025 13:02:44 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.025 13:02:44 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:53.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.025 13:02:44 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.025 13:02:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:53.025 [2024-07-25 13:02:45.068126] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:53.025 [2024-07-25 13:02:45.068348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64961 ] 00:06:53.283 [2024-07-25 13:02:45.236152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.283 [2024-07-25 13:02:45.452311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.219 13:02:46 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.219 13:02:46 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:54.219 13:02:46 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:54.219 13:02:46 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:54.219 13:02:46 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:54.219 13:02:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:54.219 13:02:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.219 13:02:46 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:54.220 13:02:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.220 13:02:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.478 13:02:46 
blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:54.478 13:02:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:54.478 13:02:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:54.479 13:02:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "af6bf5b0-7420-431a-870f-f001253e7b1f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "af6bf5b0-7420-431a-870f-f001253e7b1f",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b45e597a-cc45-4946-a65c-c3a942345e79"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b45e597a-cc45-4946-a65c-c3a942345e79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' 
' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "819bdf7f-d2dc-4ced-a9c3-113ef7e813e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "819bdf7f-d2dc-4ced-a9c3-113ef7e813e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "80b37560-b65e-47e9-a1cc-d4175d072c23"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80b37560-b65e-47e9-a1cc-d4175d072c23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' 
},' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0850da97-5481-4cea-8744-bf33e71709f4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0850da97-5481-4cea-8744-bf33e71709f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7eafe74c-7cb1-43e6-a82f-d97fb2607e3f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7eafe74c-7cb1-43e6-a82f-d97fb2607e3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:54.737 13:02:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:54.737 13:02:46 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:54.737 13:02:46 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:54.737 13:02:46 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 64961 00:06:54.737 13:02:46 blockdev_nvme -- 
common/autotest_common.sh@950 -- # '[' -z 64961 ']' 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 64961 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64961 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.737 killing process with pid 64961 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64961' 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 64961 00:06:54.737 13:02:46 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 64961 00:06:57.265 13:02:48 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.266 13:02:48 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.266 13:02:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:57.266 13:02:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.266 13:02:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.266 ************************************ 00:06:57.266 START TEST bdev_hello_world 00:06:57.266 ************************************ 00:06:57.266 13:02:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.266 [2024-07-25 13:02:48.957373] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:57.266 [2024-07-25 13:02:48.957543] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65056 ] 00:06:57.266 [2024-07-25 13:02:49.130144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.266 [2024-07-25 13:02:49.315905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.831 [2024-07-25 13:02:49.929175] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:57.831 [2024-07-25 13:02:49.929235] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:57.831 [2024-07-25 13:02:49.929267] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:57.831 [2024-07-25 13:02:49.932245] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:57.831 [2024-07-25 13:02:49.932749] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:57.831 [2024-07-25 13:02:49.932785] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:57.831 [2024-07-25 13:02:49.932939] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:57.831 00:06:57.831 [2024-07-25 13:02:49.932976] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:59.205 00:06:59.205 real 0m2.218s 00:06:59.205 user 0m1.885s 00:06:59.205 sys 0m0.221s 00:06:59.205 13:02:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.205 13:02:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:59.205 ************************************ 00:06:59.205 END TEST bdev_hello_world 00:06:59.205 ************************************ 00:06:59.205 13:02:51 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:59.205 13:02:51 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.205 13:02:51 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.205 13:02:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.205 ************************************ 00:06:59.205 START TEST bdev_bounds 00:06:59.205 ************************************ 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=65098 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.205 Process bdevio pid: 65098 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 65098' 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 65098 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 65098 ']' 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.205 13:02:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.205 [2024-07-25 13:02:51.231585] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:59.205 [2024-07-25 13:02:51.231730] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65098 ] 00:06:59.463 [2024-07-25 13:02:51.395288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.463 [2024-07-25 13:02:51.606068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.463 [2024-07-25 13:02:51.606183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.463 [2024-07-25 13:02:51.606199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.397 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.397 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:00.398 13:02:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.398 I/O targets: 00:07:00.398 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.398 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:00.398 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.398 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.398 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.398 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.398 00:07:00.398 00:07:00.398 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.398 http://cunit.sourceforge.net/ 00:07:00.398 00:07:00.398 00:07:00.398 Suite: bdevio tests on: Nvme3n1 00:07:00.398 Test: blockdev write read block ...passed 00:07:00.398 Test: blockdev write zeroes read block ...passed 00:07:00.398 Test: blockdev write zeroes read no split ...passed 00:07:00.398 Test: blockdev write zeroes read split ...passed 00:07:00.398 Test: blockdev write zeroes read split partial ...passed 00:07:00.398 Test: blockdev reset ...[2024-07-25 13:02:52.416770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:00.398 [2024-07-25 13:02:52.420556] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.398 passed 00:07:00.398 Test: blockdev write read 8 blocks ...passed 00:07:00.398 Test: blockdev write read size > 128k ...passed 00:07:00.398 Test: blockdev write read invalid size ...passed 00:07:00.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.398 Test: blockdev write read max offset ...passed 00:07:00.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.398 Test: blockdev writev readv 8 blocks ...passed 00:07:00.398 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.398 Test: blockdev writev readv block ...passed 00:07:00.398 Test: blockdev writev readv size > 128k ...passed 00:07:00.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.398 Test: blockdev comparev and writev ...[2024-07-25 13:02:52.428045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276e0a000 len:0x1000 00:07:00.398 [2024-07-25 13:02:52.428272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.398 passed 00:07:00.398 Test: blockdev nvme passthru rw ...passed 00:07:00.398 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:02:52.429136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.398 [2024-07-25 13:02:52.429281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.398 passed 00:07:00.398 Test: blockdev nvme admin passthru ...passed 00:07:00.398 Test: blockdev copy ...passed 00:07:00.398 Suite: bdevio tests on: Nvme2n3 00:07:00.398 Test: blockdev write read block ...passed 00:07:00.398 Test: blockdev write zeroes read block ...passed 00:07:00.398 Test: blockdev write zeroes read no split ...passed 00:07:00.398 Test: blockdev write zeroes read split ...passed 00:07:00.398 Test: blockdev write zeroes read split partial ...passed 00:07:00.398 Test: blockdev reset ...[2024-07-25 13:02:52.495450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.398 [2024-07-25 13:02:52.499609] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.398 passed 00:07:00.398 Test: blockdev write read 8 blocks ...passed 00:07:00.398 Test: blockdev write read size > 128k ...passed 00:07:00.398 Test: blockdev write read invalid size ...passed 00:07:00.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.398 Test: blockdev write read max offset ...passed 00:07:00.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.398 Test: blockdev writev readv 8 blocks ...passed 00:07:00.398 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.398 Test: blockdev writev readv block ...passed 00:07:00.398 Test: blockdev writev readv size > 128k ...passed 00:07:00.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.398 Test: blockdev comparev and writev ...[2024-07-25 13:02:52.507219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x258c04000 len:0x1000 00:07:00.398 [2024-07-25 13:02:52.507408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.398 passed 00:07:00.398 Test: blockdev nvme passthru rw ...passed 00:07:00.398 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:02:52.508251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.398 [2024-07-25 13:02:52.508381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.398 passed 00:07:00.398 Test: blockdev nvme admin passthru ...passed 00:07:00.398 Test: blockdev copy ...passed 00:07:00.398 Suite: bdevio tests on: Nvme2n2 00:07:00.398 Test: blockdev write read block ...passed 00:07:00.398 Test: blockdev write zeroes read block ...passed 00:07:00.398 Test: blockdev write zeroes read no split ...passed 00:07:00.398 Test: blockdev write zeroes read split ...passed 00:07:00.398 Test: blockdev write zeroes read split partial ...passed 00:07:00.398 Test: blockdev reset ...[2024-07-25 13:02:52.576862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.398 [2024-07-25 13:02:52.580998] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.398 passed 00:07:00.398 Test: blockdev write read 8 blocks ...passed 00:07:00.398 Test: blockdev write read size > 128k ...passed 00:07:00.398 Test: blockdev write read invalid size ...passed 00:07:00.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.398 Test: blockdev write read max offset ...passed 00:07:00.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.398 Test: blockdev writev readv 8 blocks ...passed 00:07:00.398 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.398 Test: blockdev writev readv block ...passed 00:07:00.398 Test: blockdev writev readv size > 128k ...passed 00:07:00.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.656 Test: blockdev comparev and writev ...[2024-07-25 13:02:52.588769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295a3a000 len:0x1000 00:07:00.656 passed 00:07:00.656 Test: blockdev nvme passthru rw ...[2024-07-25 13:02:52.589119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.656 passed 00:07:00.656 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:02:52.590099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.656 passed 00:07:00.656 Test: blockdev nvme admin passthru ...[2024-07-25 13:02:52.590341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.656 passed 00:07:00.656 Test: blockdev copy ...passed 00:07:00.656 Suite: bdevio tests on: Nvme2n1 00:07:00.656 Test: blockdev write read block ...passed 00:07:00.656 Test: blockdev write zeroes read block ...passed 00:07:00.656 Test: blockdev write zeroes read no split ...passed 00:07:00.656 Test: blockdev write zeroes read split ...passed 00:07:00.656 Test: blockdev write zeroes read split partial ...passed 00:07:00.656 Test: blockdev reset ...[2024-07-25 13:02:52.661200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.656 [2024-07-25 13:02:52.665493] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.656 passed 00:07:00.656 Test: blockdev write read 8 blocks ...passed 00:07:00.656 Test: blockdev write read size > 128k ...passed 00:07:00.656 Test: blockdev write read invalid size ...passed 00:07:00.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.656 Test: blockdev write read max offset ...passed 00:07:00.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.656 Test: blockdev writev readv 8 blocks ...passed 00:07:00.656 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.656 Test: blockdev writev readv block ...passed 00:07:00.656 Test: blockdev writev readv size > 128k ...passed 00:07:00.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.656 Test: blockdev comparev and writev ...[2024-07-25 13:02:52.673813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295a34000 len:0x1000 00:07:00.656 passed 00:07:00.656 Test: blockdev nvme passthru rw ...[2024-07-25 13:02:52.674185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.656 passed 00:07:00.656 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:02:52.675002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.656 passed 00:07:00.656 Test: blockdev nvme admin passthru ...[2024-07-25 13:02:52.675257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.656 passed 00:07:00.656 Test: blockdev copy ...passed 00:07:00.656 Suite: bdevio tests on: Nvme1n1 00:07:00.656 Test: blockdev write read block ...passed 00:07:00.656 Test: blockdev write zeroes read block ...passed 00:07:00.656 Test: blockdev write zeroes read no split ...passed 00:07:00.656 Test: blockdev write zeroes read split ...passed 00:07:00.656 Test: blockdev write zeroes read split partial ...passed 00:07:00.656 Test: blockdev reset ...[2024-07-25 13:02:52.756853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:00.656 [2024-07-25 13:02:52.760606] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.656 passed 00:07:00.656 Test: blockdev write read 8 blocks ...passed 00:07:00.656 Test: blockdev write read size > 128k ...passed 00:07:00.656 Test: blockdev write read invalid size ...passed 00:07:00.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.656 Test: blockdev write read max offset ...passed 00:07:00.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.657 Test: blockdev writev readv 8 blocks ...passed 00:07:00.657 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.657 Test: blockdev writev readv block ...passed 00:07:00.657 Test: blockdev writev readv size > 128k ...passed 00:07:00.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.657 Test: blockdev comparev and writev ...[2024-07-25 13:02:52.770169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295a30000 len:0x1000 00:07:00.657 passed 00:07:00.657 Test: blockdev nvme passthru rw ...[2024-07-25 13:02:52.770504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.657 passed 00:07:00.657 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:02:52.771412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.657 passed 00:07:00.657 Test: blockdev nvme admin passthru ...[2024-07-25 13:02:52.771657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.657 passed 00:07:00.657 Test: blockdev copy ...passed 00:07:00.657 Suite: bdevio tests on: Nvme0n1 00:07:00.657 Test: blockdev write read block ...passed 00:07:00.657 Test: blockdev write zeroes read block ...passed 00:07:00.657 Test: blockdev write zeroes read no split ...passed 00:07:00.657 Test: blockdev write zeroes read split ...passed 00:07:00.657 Test: blockdev write zeroes read split partial ...passed 00:07:00.657 Test: blockdev reset ...[2024-07-25 13:02:52.837970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:00.657 [2024-07-25 13:02:52.841689] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.657 passed 00:07:00.657 Test: blockdev write read 8 blocks ...passed 00:07:00.657 Test: blockdev write read size > 128k ...passed 00:07:00.657 Test: blockdev write read invalid size ...passed 00:07:00.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.657 Test: blockdev write read max offset ...passed 00:07:00.657 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.657 Test: blockdev writev readv 8 blocks ...passed 00:07:00.915 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.915 Test: blockdev writev readv block ...passed 00:07:00.915 Test: blockdev writev readv size > 128k ...passed 00:07:00.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.915 Test: blockdev comparev and writev ...passed 00:07:00.915 Test: blockdev nvme passthru rw ...[2024-07-25 13:02:52.849019] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:00.915 separate metadata which is not supported yet. 00:07:00.915 passed 00:07:00.915 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:02:52.849707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:00.915 passed 00:07:00.915 Test: blockdev nvme admin passthru ...[2024-07-25 13:02:52.849976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:00.915 passed 00:07:00.915 Test: blockdev copy ...passed 00:07:00.915 00:07:00.915 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.915 suites 6 6 n/a 0 0 00:07:00.915 tests 138 138 138 0 0 00:07:00.915 asserts 893 893 893 0 n/a 00:07:00.915 00:07:00.915 Elapsed time = 1.359 seconds 00:07:00.915 0 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 65098 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 65098 ']' 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 65098 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65098 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65098' 00:07:00.915 killing process with pid 65098 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 65098 00:07:00.915 13:02:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 65098 00:07:01.847 13:02:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:01.847 00:07:01.847 real 0m2.786s 00:07:01.847 user 0m6.834s 00:07:01.847 sys 0m0.344s 00:07:01.847 13:02:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.847 13:02:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.847 ************************************ 00:07:01.847 END 
TEST bdev_bounds 00:07:01.847 ************************************ 00:07:01.847 13:02:53 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.847 13:02:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:01.847 13:02:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.847 13:02:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.847 ************************************ 00:07:01.847 START TEST bdev_nbd 00:07:01.847 ************************************ 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=65163 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 65163 /var/tmp/spdk-nbd.sock 00:07:01.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 65163 ']' 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.847 13:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.104 [2024-07-25 13:02:54.053566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:02.104 [2024-07-25 13:02:54.053736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.104 [2024-07-25 13:02:54.219156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.362 [2024-07-25 13:02:54.411437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.927 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.928 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:03.185 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:03.185 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:03.185 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # 
local i 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.444 1+0 records in 00:07:03.444 1+0 records out 00:07:03.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702162 s, 5.8 MB/s 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.444 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.703 1+0 records in 00:07:03.703 1+0 records out 00:07:03.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556903 s, 7.4 MB/s 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.703 13:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.961 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.962 1+0 records in 00:07:03.962 1+0 records out 00:07:03.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767218 s, 5.3 MB/s 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.962 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.221 1+0 records in 00:07:04.221 1+0 records out 00:07:04.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773889 s, 5.3 MB/s 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.221 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.479 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.737 1+0 records in 00:07:04.737 1+0 records out 00:07:04.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128233 s, 3.2 MB/s 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.738 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.997 1+0 records in 00:07:04.997 1+0 records out 00:07:04.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00438984 s, 933 kB/s 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.997 13:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd0", 00:07:05.256 "bdev_name": "Nvme0n1" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd1", 00:07:05.256 "bdev_name": "Nvme1n1" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd2", 00:07:05.256 "bdev_name": "Nvme2n1" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd3", 00:07:05.256 "bdev_name": "Nvme2n2" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd4", 00:07:05.256 "bdev_name": "Nvme2n3" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd5", 00:07:05.256 "bdev_name": "Nvme3n1" 00:07:05.256 } 00:07:05.256 ]' 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd0", 00:07:05.256 "bdev_name": "Nvme0n1" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd1", 00:07:05.256 "bdev_name": "Nvme1n1" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd2", 00:07:05.256 "bdev_name": "Nvme2n1" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd3", 00:07:05.256 "bdev_name": "Nvme2n2" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd4", 00:07:05.256 "bdev_name": "Nvme2n3" 00:07:05.256 }, 00:07:05.256 { 00:07:05.256 "nbd_device": "/dev/nbd5", 00:07:05.256 "bdev_name": "Nvme3n1" 00:07:05.256 } 00:07:05.256 ]' 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.256 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.514 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.514 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.514 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.514 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.514 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.514 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.515 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.515 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.515 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.515 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.773 13:02:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.031 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.290 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.548 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.806 13:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.064 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:07.320 /dev/nbd0 00:07:07.320 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.320 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.320 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:07.320 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.320 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.320 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.321 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:07.578 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.578 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.578 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.579 1+0 records in 00:07:07.579 1+0 records out 00:07:07.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485905 s, 8.4 MB/s 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.579 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:07.877 /dev/nbd1 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.877 13:02:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.877 1+0 records in 00:07:07.877 1+0 records out 00:07:07.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050802 s, 8.1 MB/s 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.877 13:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:08.135 /dev/nbd10 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.135 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.136 1+0 records in 00:07:08.136 1+0 records out 00:07:08.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078972 s, 5.2 MB/s 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.136 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:08.394 /dev/nbd11 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:08.394 
13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.394 1+0 records in 00:07:08.394 1+0 records out 00:07:08.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580354 s, 7.1 MB/s 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.394 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:08.652 /dev/nbd12 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.652 1+0 records in 00:07:08.652 1+0 records out 00:07:08.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737513 s, 5.6 MB/s 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.652 
13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.652 13:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:08.910 /dev/nbd13 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.910 1+0 records in 00:07:08.910 1+0 records out 00:07:08.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0016785 s, 2.4 MB/s 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.910 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.168 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.426 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd0", 00:07:09.426 "bdev_name": "Nvme0n1" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd1", 00:07:09.426 "bdev_name": "Nvme1n1" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd10", 
00:07:09.426 "bdev_name": "Nvme2n1" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd11", 00:07:09.426 "bdev_name": "Nvme2n2" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd12", 00:07:09.426 "bdev_name": "Nvme2n3" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd13", 00:07:09.426 "bdev_name": "Nvme3n1" 00:07:09.426 } 00:07:09.426 ]' 00:07:09.426 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd0", 00:07:09.426 "bdev_name": "Nvme0n1" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd1", 00:07:09.426 "bdev_name": "Nvme1n1" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd10", 00:07:09.426 "bdev_name": "Nvme2n1" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd11", 00:07:09.426 "bdev_name": "Nvme2n2" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd12", 00:07:09.426 "bdev_name": "Nvme2n3" 00:07:09.426 }, 00:07:09.426 { 00:07:09.426 "nbd_device": "/dev/nbd13", 00:07:09.426 "bdev_name": "Nvme3n1" 00:07:09.426 } 00:07:09.426 ]' 00:07:09.426 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.427 /dev/nbd1 00:07:09.427 /dev/nbd10 00:07:09.427 /dev/nbd11 00:07:09.427 /dev/nbd12 00:07:09.427 /dev/nbd13' 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.427 /dev/nbd1 00:07:09.427 /dev/nbd10 00:07:09.427 /dev/nbd11 00:07:09.427 /dev/nbd12 00:07:09.427 /dev/nbd13' 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.427 256+0 records in 00:07:09.427 256+0 records out 00:07:09.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00641219 s, 164 MB/s 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.427 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.685 256+0 records in 00:07:09.685 256+0 records out 00:07:09.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.151536 s, 6.9 MB/s 00:07:09.685 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.685 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.685 256+0 records in 00:07:09.685 256+0 records out 00:07:09.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134404 s, 7.8 MB/s 00:07:09.685 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.685 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:09.942 256+0 records in 00:07:09.942 256+0 records out 00:07:09.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141451 s, 7.4 MB/s 00:07:09.942 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.942 13:03:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:09.942 256+0 records in 00:07:09.942 256+0 records out 00:07:09.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15787 s, 6.6 MB/s 00:07:09.942 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.942 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.204 256+0 records in 00:07:10.204 256+0 records out 00:07:10.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131798 s, 8.0 MB/s 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.204 256+0 records in 00:07:10.204 256+0 records out 00:07:10.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124439 s, 8.4 MB/s 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.204 
13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.204 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.463 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.720 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.978 13:03:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.236 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.494 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.753 13:03:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.011 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:12.576 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:12.834 malloc_lvol_verify 00:07:12.834 13:03:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:13.422 08732da2-e390-40a0-a845-644dc11da730 00:07:13.422 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:13.422 f04c40b8-0ed8-4cc7-9d1e-17243d1c8da9 00:07:13.423 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol 
/dev/nbd0 00:07:13.680 /dev/nbd0 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:13.939 mke2fs 1.46.5 (30-Dec-2021) 00:07:13.939 Discarding device blocks: 0/4096 done 00:07:13.939 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:13.939 00:07:13.939 Allocating group tables: 0/1 done 00:07:13.939 Writing inode tables: 0/1 done 00:07:13.939 Creating journal (1024 blocks): done 00:07:13.939 Writing superblocks and filesystem accounting information: 0/1 done 00:07:13.939 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.939 13:03:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 65163 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 65163 ']' 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 65163 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65163 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.197 killing process with pid 65163 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65163' 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 65163 00:07:14.197 13:03:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 65163 00:07:15.570 13:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # 
trap - SIGINT SIGTERM EXIT 00:07:15.570 00:07:15.570 real 0m13.508s 00:07:15.570 user 0m19.513s 00:07:15.570 sys 0m4.107s 00:07:15.570 13:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.570 13:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:15.570 ************************************ 00:07:15.570 END TEST bdev_nbd 00:07:15.570 ************************************ 00:07:15.570 13:03:07 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:15.570 13:03:07 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:15.570 skipping fio tests on NVMe due to multi-ns failures. 00:07:15.570 13:03:07 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:15.570 13:03:07 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:15.570 13:03:07 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:15.570 13:03:07 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:15.570 13:03:07 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.570 13:03:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.570 ************************************ 00:07:15.570 START TEST bdev_verify 00:07:15.570 ************************************ 00:07:15.570 13:03:07 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:15.570 [2024-07-25 13:03:07.600784] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:15.570 [2024-07-25 13:03:07.600951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65576 ] 00:07:15.828 [2024-07-25 13:03:07.763649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.828 [2024-07-25 13:03:07.969252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.828 [2024-07-25 13:03:07.969291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.761 Running I/O for 5 seconds... 
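The latency table that follows is the output of the bdevperf verify pass launched just above. To rerun only that step outside of run_test, the recorded invocation can be reused as-is; a sketch, assuming the bdev.json config generated earlier in this job is still present in the workspace (its contents are not shown here):

  cd /home/vagrant/spdk_repo/spdk
  # 128-deep queue, 4 KiB I/Os, 5-second verify workload on cores 0-1 (-m 0x3);
  # -C is copied verbatim from the trace. The big-I/O pass later in the log
  # differs only in using -o 65536.
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3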
00:07:22.058 00:07:22.058 Latency(us) 00:07:22.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.058 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x0 length 0xbd0bd 00:07:22.058 Nvme0n1 : 5.09 1484.23 5.80 0.00 0.00 86017.63 17158.52 79596.45 00:07:22.058 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:22.058 Nvme0n1 : 5.04 1522.44 5.95 0.00 0.00 83671.75 17039.36 83409.45 00:07:22.058 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x0 length 0xa0000 00:07:22.058 Nvme1n1 : 5.09 1483.60 5.80 0.00 0.00 85859.47 17754.30 74830.20 00:07:22.058 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0xa0000 length 0xa0000 00:07:22.058 Nvme1n1 : 5.08 1525.48 5.96 0.00 0.00 83368.69 10187.87 79119.83 00:07:22.058 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x0 length 0x80000 00:07:22.058 Nvme2n1 : 5.09 1482.99 5.79 0.00 0.00 85710.13 17515.99 71970.44 00:07:22.058 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x80000 length 0x80000 00:07:22.058 Nvme2n1 : 5.08 1524.85 5.96 0.00 0.00 83215.93 10366.60 78643.20 00:07:22.058 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x0 length 0x80000 00:07:22.058 Nvme2n2 : 5.10 1482.02 5.79 0.00 0.00 85577.61 18588.39 73876.95 00:07:22.058 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x80000 length 0x80000 00:07:22.058 Nvme2n2 : 5.09 1533.44 5.99 0.00 0.00 82814.65 9651.67 80073.08 00:07:22.058 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x0 length 0x80000 00:07:22.058 Nvme2n3 : 5.10 1481.00 5.79 0.00 0.00 85462.72 17635.14 76260.07 00:07:22.058 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x80000 length 0x80000 00:07:22.058 Nvme2n3 : 5.10 1532.37 5.99 0.00 0.00 82684.34 11021.96 81502.95 00:07:22.058 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x0 length 0x20000 00:07:22.058 Nvme3n1 : 5.10 1480.10 5.78 0.00 0.00 85336.40 11021.96 79596.45 00:07:22.058 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.058 Verification LBA range: start 0x20000 length 0x20000 00:07:22.058 Nvme3n1 : 5.10 1531.34 5.98 0.00 0.00 82557.78 12868.89 83886.08 00:07:22.058 =================================================================================================================== 00:07:22.058 Total : 18063.86 70.56 0.00 0.00 84337.06 9651.67 83886.08 00:07:23.432 00:07:23.432 real 0m7.756s 00:07:23.432 user 0m14.136s 00:07:23.432 sys 0m0.261s 00:07:23.432 13:03:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.432 ************************************ 00:07:23.432 13:03:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 END TEST bdev_verify 00:07:23.432 ************************************ 00:07:23.432 13:03:15 blockdev_nvme -- bdev/blockdev.sh@777 -- 
# run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.432 13:03:15 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:23.432 13:03:15 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.432 13:03:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 ************************************ 00:07:23.432 START TEST bdev_verify_big_io 00:07:23.432 ************************************ 00:07:23.432 13:03:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.432 [2024-07-25 13:03:15.409707] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:23.432 [2024-07-25 13:03:15.409866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65679 ] 00:07:23.432 [2024-07-25 13:03:15.578246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.689 [2024-07-25 13:03:15.808945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.689 [2024-07-25 13:03:15.808961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.623 Running I/O for 5 seconds... 00:07:31.202 00:07:31.202 Latency(us) 00:07:31.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.202 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.202 Verification LBA range: start 0x0 length 0xbd0b 00:07:31.202 Nvme0n1 : 5.74 108.91 6.81 0.00 0.00 1105433.13 20375.74 1692973.61 00:07:31.202 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:31.203 Nvme0n1 : 5.74 117.16 7.32 0.00 0.00 1034059.99 20256.58 1113397.06 00:07:31.203 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x0 length 0xa000 00:07:31.203 Nvme1n1 : 5.74 119.47 7.47 0.00 0.00 997717.42 40989.79 1197283.14 00:07:31.203 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0xa000 length 0xa000 00:07:31.203 Nvme1n1 : 5.75 122.48 7.66 0.00 0.00 979393.88 58386.62 930372.89 00:07:31.203 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x0 length 0x8000 00:07:31.203 Nvme2n1 : 5.92 116.33 7.27 0.00 0.00 978181.05 85792.58 1746355.67 00:07:31.203 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x8000 length 0x8000 00:07:31.203 Nvme2n1 : 5.83 126.26 7.89 0.00 0.00 921759.95 77689.95 892242.85 00:07:31.203 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x0 length 0x8000 00:07:31.203 Nvme2n2 : 5.93 126.89 7.93 0.00 0.00 874411.54 90558.84 1288795.23 00:07:31.203 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x8000 length 0x8000 00:07:31.203 Nvme2n2 : 5.83 126.17 7.89 0.00 
0.00 890399.81 80549.70 937998.89 00:07:31.203 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x0 length 0x8000 00:07:31.203 Nvme2n3 : 5.98 136.21 8.51 0.00 0.00 789263.79 14239.19 1319299.26 00:07:31.203 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x8000 length 0x8000 00:07:31.203 Nvme2n3 : 5.92 134.85 8.43 0.00 0.00 809113.62 46232.67 983754.94 00:07:31.203 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x0 length 0x2000 00:07:31.203 Nvme3n1 : 6.07 160.80 10.05 0.00 0.00 650692.38 1407.53 1860745.77 00:07:31.203 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.203 Verification LBA range: start 0x2000 length 0x2000 00:07:31.203 Nvme3n1 : 5.97 150.15 9.38 0.00 0.00 706639.66 3991.74 991380.95 00:07:31.203 =================================================================================================================== 00:07:31.203 Total : 1545.69 96.61 0.00 0.00 878615.16 1407.53 1860745.77 00:07:32.589 00:07:32.589 real 0m9.298s 00:07:32.589 user 0m17.034s 00:07:32.589 sys 0m0.271s 00:07:32.589 13:03:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.589 13:03:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:32.589 ************************************ 00:07:32.589 END TEST bdev_verify_big_io 00:07:32.589 ************************************ 00:07:32.589 13:03:24 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.589 13:03:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:32.589 13:03:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.589 13:03:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.589 ************************************ 00:07:32.589 START TEST bdev_write_zeroes 00:07:32.589 ************************************ 00:07:32.589 13:03:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.589 [2024-07-25 13:03:24.753614] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:32.589 [2024-07-25 13:03:24.753774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65794 ] 00:07:32.847 [2024-07-25 13:03:24.917720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.105 [2024-07-25 13:03:25.103087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.671 Running I/O for 1 seconds... 
00:07:34.626 00:07:34.626 Latency(us) 00:07:34.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.626 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme0n1 : 1.03 5032.29 19.66 0.00 0.00 25350.50 12332.68 52190.49 00:07:34.626 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme1n1 : 1.03 5019.81 19.61 0.00 0.00 25367.40 13583.83 44802.79 00:07:34.626 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme2n1 : 1.03 5011.94 19.58 0.00 0.00 25325.46 13643.40 45994.36 00:07:34.626 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme2n2 : 1.04 5005.70 19.55 0.00 0.00 25250.83 11319.85 45041.11 00:07:34.626 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme2n3 : 1.04 4999.58 19.53 0.00 0.00 25228.48 10843.23 44564.48 00:07:34.626 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.626 Nvme3n1 : 1.04 4993.30 19.51 0.00 0.00 25208.16 8996.31 42896.29 00:07:34.626 =================================================================================================================== 00:07:34.626 Total : 30062.61 117.43 0.00 0.00 25288.47 8996.31 52190.49 00:07:36.003 00:07:36.003 real 0m3.334s 00:07:36.003 user 0m2.968s 00:07:36.003 sys 0m0.234s 00:07:36.003 13:03:27 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.003 ************************************ 00:07:36.003 END TEST bdev_write_zeroes 00:07:36.003 13:03:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:36.003 ************************************ 00:07:36.003 13:03:28 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.003 13:03:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:36.003 13:03:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.003 13:03:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.003 ************************************ 00:07:36.003 START TEST bdev_json_nonenclosed 00:07:36.003 ************************************ 00:07:36.003 13:03:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.003 [2024-07-25 13:03:28.184687] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:36.003 [2024-07-25 13:03:28.185699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65853 ] 00:07:36.261 [2024-07-25 13:03:28.373828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.518 [2024-07-25 13:03:28.637653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.518 [2024-07-25 13:03:28.637778] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:07:36.519 [2024-07-25 13:03:28.637814] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:36.519 [2024-07-25 13:03:28.637831] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.085 00:07:37.085 real 0m1.027s 00:07:37.085 user 0m0.750s 00:07:37.085 sys 0m0.164s 00:07:37.085 13:03:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.085 ************************************ 00:07:37.085 END TEST bdev_json_nonenclosed 00:07:37.085 ************************************ 00:07:37.085 13:03:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:37.085 13:03:29 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.085 13:03:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:37.085 13:03:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.085 13:03:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.085 ************************************ 00:07:37.085 START TEST bdev_json_nonarray 00:07:37.085 ************************************ 00:07:37.085 13:03:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.085 [2024-07-25 13:03:29.208701] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:37.085 [2024-07-25 13:03:29.208866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65884 ] 00:07:37.343 [2024-07-25 13:03:29.369997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.600 [2024-07-25 13:03:29.560453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.600 [2024-07-25 13:03:29.560599] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:37.600 [2024-07-25 13:03:29.560635] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:37.600 [2024-07-25 13:03:29.560653] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.857 00:07:37.857 real 0m0.873s 00:07:37.857 user 0m0.638s 00:07:37.857 sys 0m0.127s 00:07:37.857 13:03:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.857 ************************************ 00:07:37.858 END TEST bdev_json_nonarray 00:07:37.858 13:03:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:37.858 ************************************ 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:37.858 13:03:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:37.858 00:07:37.858 real 0m45.163s 00:07:37.858 user 1m8.093s 00:07:37.858 sys 0m6.486s 00:07:37.858 13:03:30 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.858 13:03:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.858 ************************************ 00:07:37.858 END TEST blockdev_nvme 00:07:37.858 ************************************ 00:07:38.116 13:03:30 -- spdk/autotest.sh@217 -- # uname -s 00:07:38.116 13:03:30 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:07:38.116 13:03:30 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:38.116 13:03:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.116 13:03:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.116 13:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:38.116 ************************************ 00:07:38.116 START TEST blockdev_nvme_gpt 00:07:38.116 ************************************ 00:07:38.116 13:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:38.116 * Looking for test storage... 
00:07:38.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65960 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:38.116 13:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 65960 00:07:38.116 13:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 65960 ']' 00:07:38.116 13:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.116 13:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.116 13:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:38.116 13:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.116 13:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.374 [2024-07-25 13:03:30.325397] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:38.374 [2024-07-25 13:03:30.325656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65960 ] 00:07:38.374 [2024-07-25 13:03:30.513656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.633 [2024-07-25 13:03:30.786138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.591 13:03:31 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.591 13:03:31 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:39.591 13:03:31 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:39.591 13:03:31 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:39.591 13:03:31 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:39.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.107 Waiting for block devices as requested 00:07:40.107 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.107 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.365 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.365 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:45.648 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:45.648 BYT; 00:07:45.648 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:45.648 BYT; 00:07:45.648 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:45.648 13:03:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:45.648 13:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:46.616 The operation has completed successfully. 
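For reference, the GPT setup traced above reduces to a few commands: parted labels the scratch NVMe device GPT and creates two half-size partitions, then sgdisk stamps SPDK's partition type GUIDs and the test's fixed unique partition GUIDs (the second sgdisk call follows in the next entries). A condensed sketch is below; the device path and GUID values are copied verbatim from the trace and are specific to this test VM, so treat them as placeholders anywhere else.

#!/usr/bin/env bash
# Condensed from the blockdev.sh trace: GPT-label the scratch device, create
# two partitions, then tag them with SPDK's partition type GUIDs and the
# test's fixed unique partition GUIDs.
set -euo pipefail

dev=/dev/nvme0n1                                        # scratch device chosen by the test
spdk_gpt_guid=6527994e-2c5a-4eec-9613-8f5944074e8b      # SPDK_GPT_PART_TYPE_GUID (from gpt.h)
spdk_gpt_old_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c  # SPDK_GPT_PART_TYPE_GUID_OLD
uniq1=6f89f330-603b-4116-ac73-2ca8eae53030
uniq2=abf1734f-66e5-4c0f-aa29-4021d4d307df

parted -s "$dev" mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%

sgdisk -t "1:$spdk_gpt_guid"     -u "1:$uniq1" "$dev"   # partition 1 -> current SPDK GUID
sgdisk -t "2:$spdk_gpt_old_guid" -u "2:$uniq2" "$dev"   # partition 2 -> legacy SPDK GUID

Once both sgdisk calls report success, these GUIDs are what the bdev_get_bdevs dump further down reports on the Nvme1n1p1/Nvme1n1p2 GPT bdevs.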
00:07:46.616 13:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:47.550 The operation has completed successfully. 00:07:47.550 13:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:48.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:48.775 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.775 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.775 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.775 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.775 13:03:40 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:48.775 13:03:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.775 13:03:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.775 [] 00:07:48.775 13:03:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.775 13:03:40 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:48.775 13:03:40 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:48.775 13:03:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:48.775 13:03:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:48.775 13:03:40 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:48.775 13:03:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.775 13:03:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.032 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.032 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:49.032 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.032 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.291 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.291 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.292 
13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "38a6e8cc-c10c-428c-bb58-d4146c39522d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "38a6e8cc-c10c-428c-bb58-d4146c39522d",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5b58dc02-e487-4696-b1b8-01461220ee84"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b58dc02-e487-4696-b1b8-01461220ee84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "69362522-1308-42e9-b25e-e427a940578e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "69362522-1308-42e9-b25e-e427a940578e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "02fd6783-3aa7-466a-b859-0deeadd29ac2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02fd6783-3aa7-466a-b859-0deeadd29ac2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "22e61ed8-12ef-4858-9a0f-8c06ec4d382a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "22e61ed8-12ef-4858-9a0f-8c06ec4d382a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:49.292 13:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 65960 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 65960 ']' 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 65960 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.292 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65960 00:07:49.550 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.550 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.550 killing process with pid 65960 00:07:49.550 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65960' 00:07:49.550 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 65960 00:07:49.550 13:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 65960 00:07:51.448 13:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:51.448 13:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:51.448 13:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:51.448 13:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.448 13:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.706 ************************************ 00:07:51.706 START TEST bdev_hello_world 00:07:51.706 ************************************ 00:07:51.706 13:03:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:51.706 [2024-07-25 13:03:43.734852] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:51.706 [2024-07-25 13:03:43.735009] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66597 ] 00:07:51.706 [2024-07-25 13:03:43.895988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.965 [2024-07-25 13:03:44.085552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.532 [2024-07-25 13:03:44.697632] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:52.532 [2024-07-25 13:03:44.697712] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:52.532 [2024-07-25 13:03:44.697749] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:52.532 [2024-07-25 13:03:44.700815] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:52.532 [2024-07-25 13:03:44.701334] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:52.532 [2024-07-25 13:03:44.701377] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:52.532 [2024-07-25 13:03:44.701585] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:52.532 00:07:52.532 [2024-07-25 13:03:44.701636] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:53.907 00:07:53.907 real 0m2.266s 00:07:53.907 user 0m1.941s 00:07:53.907 sys 0m0.212s 00:07:53.907 13:03:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.907 13:03:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:53.907 ************************************ 00:07:53.907 END TEST bdev_hello_world 00:07:53.907 ************************************ 00:07:53.907 13:03:45 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:53.907 13:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.907 13:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.907 13:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.907 ************************************ 00:07:53.907 START TEST bdev_bounds 00:07:53.907 ************************************ 00:07:53.907 13:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:53.907 13:03:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66639 00:07:53.907 13:03:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:53.907 13:03:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:53.907 Process bdevio pid: 66639 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66639' 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66639 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 66639 ']' 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.908 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.908 13:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 [2024-07-25 13:03:46.060179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:53.908 [2024-07-25 13:03:46.060413] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66639 ] 00:07:54.166 [2024-07-25 13:03:46.244392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.424 [2024-07-25 13:03:46.469619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.424 [2024-07-25 13:03:46.469785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.424 [2024-07-25 13:03:46.469800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.361 13:03:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.361 13:03:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:55.361 13:03:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:55.361 I/O targets: 00:07:55.361 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:55.361 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:55.361 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:55.361 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.361 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.361 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.361 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:55.361 00:07:55.361 00:07:55.361 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.361 http://cunit.sourceforge.net/ 00:07:55.361 00:07:55.361 00:07:55.361 Suite: bdevio tests on: Nvme3n1 00:07:55.361 Test: blockdev write read block ...passed 00:07:55.361 Test: blockdev write zeroes read block ...passed 00:07:55.361 Test: blockdev write zeroes read no split ...passed 00:07:55.361 Test: blockdev write zeroes read split ...passed 00:07:55.361 Test: blockdev write zeroes read split partial ...passed 00:07:55.361 Test: blockdev reset ...[2024-07-25 13:03:47.463512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:55.361 [2024-07-25 13:03:47.468463] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:55.361 passed 00:07:55.361 Test: blockdev write read 8 blocks ...passed 00:07:55.361 Test: blockdev write read size > 128k ...passed 00:07:55.361 Test: blockdev write read invalid size ...passed 00:07:55.361 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.361 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.361 Test: blockdev write read max offset ...passed 00:07:55.361 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.361 Test: blockdev writev readv 8 blocks ...passed 00:07:55.361 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.361 Test: blockdev writev readv block ...passed 00:07:55.361 Test: blockdev writev readv size > 128k ...passed 00:07:55.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.361 Test: blockdev comparev and writev ...[2024-07-25 13:03:47.477569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280c06000 len:0x1000 00:07:55.361 [2024-07-25 13:03:47.477655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.361 passed 00:07:55.361 Test: blockdev nvme passthru rw ...passed 00:07:55.361 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:03:47.478537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.361 [2024-07-25 13:03:47.478588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.361 passed 00:07:55.361 Test: blockdev nvme admin passthru ...passed 00:07:55.361 Test: blockdev copy ...passed 00:07:55.361 Suite: bdevio tests on: Nvme2n3 00:07:55.361 Test: blockdev write read block ...passed 00:07:55.361 Test: blockdev write zeroes read block ...passed 00:07:55.361 Test: blockdev write zeroes read no split ...passed 00:07:55.361 Test: blockdev write zeroes read split ...passed 00:07:55.637 Test: blockdev write zeroes read split partial ...passed 00:07:55.637 Test: blockdev reset ...[2024-07-25 13:03:47.566247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:55.637 [2024-07-25 13:03:47.570788] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:55.637 passed 00:07:55.637 Test: blockdev write read 8 blocks ...passed 00:07:55.637 Test: blockdev write read size > 128k ...passed 00:07:55.637 Test: blockdev write read invalid size ...passed 00:07:55.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.637 Test: blockdev write read max offset ...passed 00:07:55.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.637 Test: blockdev writev readv 8 blocks ...passed 00:07:55.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.637 Test: blockdev writev readv block ...passed 00:07:55.637 Test: blockdev writev readv size > 128k ...passed 00:07:55.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.637 Test: blockdev comparev and writev ...[2024-07-25 13:03:47.580659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x282c3c000 len:0x1000 00:07:55.637 [2024-07-25 13:03:47.580774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.637 passed 00:07:55.637 Test: blockdev nvme passthru rw ...passed 00:07:55.637 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:03:47.581808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.637 [2024-07-25 13:03:47.581876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.637 passed 00:07:55.637 Test: blockdev nvme admin passthru ...passed 00:07:55.637 Test: blockdev copy ...passed 00:07:55.637 Suite: bdevio tests on: Nvme2n2 00:07:55.637 Test: blockdev write read block ...passed 00:07:55.637 Test: blockdev write zeroes read block ...passed 00:07:55.637 Test: blockdev write zeroes read no split ...passed 00:07:55.637 Test: blockdev write zeroes read split ...passed 00:07:55.637 Test: blockdev write zeroes read split partial ...passed 00:07:55.637 Test: blockdev reset ...[2024-07-25 13:03:47.658956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:55.637 [2024-07-25 13:03:47.664145] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:55.637 passed 00:07:55.637 Test: blockdev write read 8 blocks ...passed 00:07:55.637 Test: blockdev write read size > 128k ...passed 00:07:55.637 Test: blockdev write read invalid size ...passed 00:07:55.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.638 Test: blockdev write read max offset ...passed 00:07:55.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.638 Test: blockdev writev readv 8 blocks ...passed 00:07:55.638 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.638 Test: blockdev writev readv block ...passed 00:07:55.638 Test: blockdev writev readv size > 128k ...passed 00:07:55.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.638 Test: blockdev comparev and writev ...[2024-07-25 13:03:47.674502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x282c36000 len:0x1000 00:07:55.638 [2024-07-25 13:03:47.674616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.638 passed 00:07:55.638 Test: blockdev nvme passthru rw ...passed 00:07:55.638 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.638 Test: blockdev nvme admin passthru ...[2024-07-25 13:03:47.675667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.638 [2024-07-25 13:03:47.675737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.638 passed 00:07:55.638 Test: blockdev copy ...passed 00:07:55.638 Suite: bdevio tests on: Nvme2n1 00:07:55.638 Test: blockdev write read block ...passed 00:07:55.638 Test: blockdev write zeroes read block ...passed 00:07:55.638 Test: blockdev write zeroes read no split ...passed 00:07:55.638 Test: blockdev write zeroes read split ...passed 00:07:55.638 Test: blockdev write zeroes read split partial ...passed 00:07:55.638 Test: blockdev reset ...[2024-07-25 13:03:47.765794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:55.638 passed 00:07:55.638 Test: blockdev write read 8 blocks ...[2024-07-25 13:03:47.770328] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:55.638 passed 00:07:55.638 Test: blockdev write read size > 128k ...passed 00:07:55.638 Test: blockdev write read invalid size ...passed 00:07:55.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.638 Test: blockdev write read max offset ...passed 00:07:55.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.638 Test: blockdev writev readv 8 blocks ...passed 00:07:55.638 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.638 Test: blockdev writev readv block ...passed 00:07:55.638 Test: blockdev writev readv size > 128k ...passed 00:07:55.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.638 Test: blockdev comparev and writev ...[2024-07-25 13:03:47.779100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x282c32000 len:0x1000 00:07:55.638 [2024-07-25 13:03:47.779204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.638 passed 00:07:55.638 Test: blockdev nvme passthru rw ...passed 00:07:55.638 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:03:47.780145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.638 [2024-07-25 13:03:47.780194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.638 passed 00:07:55.638 Test: blockdev nvme admin passthru ...passed 00:07:55.638 Test: blockdev copy ...passed 00:07:55.638 Suite: bdevio tests on: Nvme1n1p2 00:07:55.638 Test: blockdev write read block ...passed 00:07:55.638 Test: blockdev write zeroes read block ...passed 00:07:55.638 Test: blockdev write zeroes read no split ...passed 00:07:55.905 Test: blockdev write zeroes read split ...passed 00:07:55.905 Test: blockdev write zeroes read split partial ...passed 00:07:55.905 Test: blockdev reset ...[2024-07-25 13:03:47.854752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:55.905 [2024-07-25 13:03:47.859566] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:55.905 passed 00:07:55.905 Test: blockdev write read 8 blocks ...passed 00:07:55.905 Test: blockdev write read size > 128k ...passed 00:07:55.905 Test: blockdev write read invalid size ...passed 00:07:55.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.905 Test: blockdev write read max offset ...passed 00:07:55.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.905 Test: blockdev writev readv 8 blocks ...passed 00:07:55.905 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.905 Test: blockdev writev readv block ...passed 00:07:55.905 Test: blockdev writev readv size > 128k ...passed 00:07:55.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.905 Test: blockdev comparev and writev ...[2024-07-25 13:03:47.870776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x282c2e000 len:0x1000 00:07:55.905 [2024-07-25 13:03:47.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.905 passed 00:07:55.905 Test: blockdev nvme passthru rw ...passed 00:07:55.905 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.905 Test: blockdev nvme admin passthru ...passed 00:07:55.905 Test: blockdev copy ...passed 00:07:55.905 Suite: bdevio tests on: Nvme1n1p1 00:07:55.905 Test: blockdev write read block ...passed 00:07:55.905 Test: blockdev write zeroes read block ...passed 00:07:55.905 Test: blockdev write zeroes read no split ...passed 00:07:55.905 Test: blockdev write zeroes read split ...passed 00:07:55.905 Test: blockdev write zeroes read split partial ...passed 00:07:55.905 Test: blockdev reset ...[2024-07-25 13:03:47.958118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:55.905 [2024-07-25 13:03:47.962834] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:55.905 passed 00:07:55.905 Test: blockdev write read 8 blocks ...passed 00:07:55.905 Test: blockdev write read size > 128k ...passed 00:07:55.905 Test: blockdev write read invalid size ...passed 00:07:55.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.905 Test: blockdev write read max offset ...passed 00:07:55.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.905 Test: blockdev writev readv 8 blocks ...passed 00:07:55.905 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.905 Test: blockdev writev readv block ...passed 00:07:55.905 Test: blockdev writev readv size > 128k ...passed 00:07:55.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.905 Test: blockdev comparev and writev ...[2024-07-25 13:03:47.973620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x283a0e000 len:0x1000 00:07:55.905 [2024-07-25 13:03:47.973736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.905 passed 00:07:55.905 Test: blockdev nvme passthru rw ...passed 00:07:55.905 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.905 Test: blockdev nvme admin passthru ...passed 00:07:55.905 Test: blockdev copy ...passed 00:07:55.906 Suite: bdevio tests on: Nvme0n1 00:07:55.906 Test: blockdev write read block ...passed 00:07:55.906 Test: blockdev write zeroes read block ...passed 00:07:55.906 Test: blockdev write zeroes read no split ...passed 00:07:55.906 Test: blockdev write zeroes read split ...passed 00:07:55.906 Test: blockdev write zeroes read split partial ...passed 00:07:55.906 Test: blockdev reset ...[2024-07-25 13:03:48.057157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:55.906 [2024-07-25 13:03:48.061342] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:55.906 passed 00:07:55.906 Test: blockdev write read 8 blocks ...passed 00:07:55.906 Test: blockdev write read size > 128k ...passed 00:07:55.906 Test: blockdev write read invalid size ...passed 00:07:55.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.906 Test: blockdev write read max offset ...passed 00:07:55.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.906 Test: blockdev writev readv 8 blocks ...passed 00:07:55.906 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.906 Test: blockdev writev readv block ...passed 00:07:55.906 Test: blockdev writev readv size > 128k ...passed 00:07:55.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.906 Test: blockdev comparev and writev ...passed 00:07:55.906 Test: blockdev nvme passthru rw ...[2024-07-25 13:03:48.069681] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:55.906 separate metadata which is not supported yet. 
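The two suites above (Nvme1n1p2 and Nvme1n1p1) run against GPT partition bdevs, so their COMPARE commands land at non-zero LBAs of the underlying Nvme1n1 namespace (lba:655360 and lba:256 in the notices), which is consistent with those partitions' start offsets, whereas the raw-namespace suites compare at lba:0; the Nvme0n1 suite then skips comparev altogether because that namespace carries separate metadata, as the bdevio message notes. A small arithmetic sketch for turning those LBAs into byte offsets, assuming the 4096-byte block size implied by the len:1 / 0x1000 transfer lengths (an assumption, not stated in the log):

  # Byte offsets of the COMPARE commands above, assuming 4096-byte blocks.
  BLOCK_SIZE=4096
  for lba in 256 655360; do
      echo "lba ${lba} -> byte offset $((lba * BLOCK_SIZE))"
  done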
00:07:55.906 passed 00:07:55.906 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:03:48.070238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:55.906 passed 00:07:55.906 Test: blockdev nvme admin passthru ...[2024-07-25 13:03:48.070314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:55.906 passed 00:07:55.906 Test: blockdev copy ...passed 00:07:55.906 00:07:55.906 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.906 suites 7 7 n/a 0 0 00:07:55.906 tests 161 161 161 0 0 00:07:55.906 asserts 1025 1025 1025 0 n/a 00:07:55.906 00:07:55.906 Elapsed time = 1.922 seconds 00:07:55.906 0 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66639 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 66639 ']' 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 66639 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66639 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66639' 00:07:56.165 killing process with pid 66639 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 66639 00:07:56.165 13:03:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 66639 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:57.539 00:07:57.539 real 0m3.482s 00:07:57.539 user 0m8.851s 00:07:57.539 sys 0m0.407s 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:57.539 ************************************ 00:07:57.539 END TEST bdev_bounds 00:07:57.539 ************************************ 00:07:57.539 13:03:49 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:57.539 13:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:57.539 13:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.539 13:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.539 ************************************ 00:07:57.539 START TEST bdev_nbd 00:07:57.539 ************************************ 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=66710 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 66710 /var/tmp/spdk-nbd.sock 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 66710 ']' 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:57.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.539 13:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:57.539 [2024-07-25 13:03:49.625815] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
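The xtrace above shows nbd_function_test bringing up a dedicated bdev_svc app on its own RPC socket before any NBD work starts. A condensed sketch of that setup, reconstructed from this run's trace rather than copied from the SPDK scripts; paths, socket name and flags are taken from the lines above, the simple wait loop is a stand-in for the waitforlisten helper, and the explicit /dev/nbdX arguments match the nbd_start_disks pass further down:

  # Condensed reconstruction of the nbd_function_test setup traced above.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-nbd.sock
  CONF=$SPDK/test/bdev/bdev.json

  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$CONF" &
  nbd_pid=$!

  # Simplified stand-in for the waitforlisten helper used in the trace.
  while [ ! -S "$SOCK" ]; do sleep 0.1; done

  # Attach one bdev per NBD node over the same RPC socket.
  i=0
  for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
      "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk "$bdev" "/dev/nbd$i"
      i=$((i + 1))
  done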
00:07:57.539 [2024-07-25 13:03:49.626045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.797 [2024-07-25 13:03:49.801427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.054 [2024-07-25 13:03:49.989357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.620 13:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.878 1+0 records in 00:07:58.878 1+0 records out 00:07:58.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744631 s, 5.5 MB/s 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.878 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.473 1+0 records in 00:07:59.473 1+0 records out 00:07:59.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728076 s, 5.6 MB/s 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:59.473 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.732 1+0 records in 00:07:59.732 1+0 records out 00:07:59.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739148 s, 5.5 MB/s 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.732 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.733 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.733 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:59.733 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.990 1+0 records in 00:07:59.990 1+0 records out 00:07:59.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659837 s, 6.2 MB/s 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.990 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.991 13:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.991 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.991 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:59.991 13:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.249 1+0 records in 00:08:00.249 1+0 records out 00:08:00.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945143 s, 4.3 MB/s 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:00.249 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.508 1+0 records in 00:08:00.508 1+0 records out 00:08:00.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786679 s, 5.2 MB/s 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:00.508 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.766 13:03:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.766 1+0 records in 00:08:00.766 1+0 records out 00:08:00.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.002623 s, 1.6 MB/s 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.025 13:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd0", 00:08:01.304 "bdev_name": "Nvme0n1" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd1", 00:08:01.304 "bdev_name": "Nvme1n1p1" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd2", 00:08:01.304 "bdev_name": "Nvme1n1p2" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd3", 00:08:01.304 "bdev_name": "Nvme2n1" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd4", 00:08:01.304 "bdev_name": "Nvme2n2" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd5", 00:08:01.304 "bdev_name": "Nvme2n3" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd6", 00:08:01.304 "bdev_name": "Nvme3n1" 00:08:01.304 } 00:08:01.304 ]' 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd0", 00:08:01.304 "bdev_name": "Nvme0n1" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd1", 00:08:01.304 "bdev_name": "Nvme1n1p1" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd2", 00:08:01.304 "bdev_name": "Nvme1n1p2" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd3", 00:08:01.304 "bdev_name": "Nvme2n1" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd4", 00:08:01.304 "bdev_name": "Nvme2n2" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd5", 00:08:01.304 "bdev_name": "Nvme2n3" 00:08:01.304 }, 00:08:01.304 { 00:08:01.304 "nbd_device": "/dev/nbd6", 00:08:01.304 "bdev_name": "Nvme3n1" 00:08:01.304 } 00:08:01.304 ]' 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.304 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.564 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.822 13:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.080 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.080 13:03:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.340 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.598 13:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.856 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.440 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.698 
13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:03.698 13:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:03.958 /dev/nbd0 00:08:03.958 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:03.958 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:03.958 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:03.958 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.958 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.958 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.958 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.215 1+0 records in 00:08:04.215 1+0 records out 00:08:04.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068647 s, 6.0 MB/s 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.215 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.216 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:04.216 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:04.474 /dev/nbd1 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.474 13:03:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.474 1+0 records in 00:08:04.474 1+0 records out 00:08:04.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606128 s, 6.8 MB/s 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:04.474 13:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:05.042 /dev/nbd10 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.042 1+0 records in 00:08:05.042 1+0 records out 00:08:05.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745236 s, 5.5 MB/s 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.042 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:05.301 /dev/nbd11 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.301 1+0 records in 00:08:05.301 1+0 records out 00:08:05.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082042 s, 5.0 MB/s 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.301 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:05.560 /dev/nbd12 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
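The waitfornbd calls traced here poll /proc/partitions until the requested nbd node appears, then read a single 4 KiB block with O_DIRECT and check that a non-zero amount of data came back. A compact reconstruction based only on the commands visible in this xtrace; the real helper in test/common/autotest_common.sh may differ in details such as retry timing and the scratch-file path:

  # Reconstruction of the waitfornbd polling loop from the xtrace above.
  # The scratch path /tmp/nbdtest is a stand-in for the repo's nbdtest file.
  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # Read one 4096-byte block with O_DIRECT and verify data was returned.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }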
00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.560 1+0 records in 00:08:05.560 1+0 records out 00:08:05.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820639 s, 5.0 MB/s 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.560 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:05.818 /dev/nbd13 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.818 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.818 1+0 records in 00:08:05.818 1+0 records out 00:08:05.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653368 s, 6.3 MB/s 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.819 13:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:06.384 /dev/nbd14 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.384 1+0 records in 00:08:06.384 1+0 records out 00:08:06.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760057 s, 5.4 MB/s 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.384 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.384 { 00:08:06.384 "nbd_device": "/dev/nbd0", 00:08:06.384 "bdev_name": "Nvme0n1" 00:08:06.384 }, 00:08:06.384 { 00:08:06.384 "nbd_device": "/dev/nbd1", 00:08:06.384 "bdev_name": "Nvme1n1p1" 00:08:06.384 }, 00:08:06.384 { 00:08:06.384 "nbd_device": "/dev/nbd10", 00:08:06.384 "bdev_name": "Nvme1n1p2" 00:08:06.384 }, 00:08:06.384 { 00:08:06.384 "nbd_device": "/dev/nbd11", 00:08:06.384 "bdev_name": "Nvme2n1" 00:08:06.384 }, 00:08:06.384 { 00:08:06.384 "nbd_device": "/dev/nbd12", 00:08:06.384 "bdev_name": "Nvme2n2" 00:08:06.384 }, 00:08:06.384 { 00:08:06.384 "nbd_device": "/dev/nbd13", 00:08:06.384 "bdev_name": "Nvme2n3" 
00:08:06.384 }, 00:08:06.384 { 00:08:06.384 "nbd_device": "/dev/nbd14", 00:08:06.384 "bdev_name": "Nvme3n1" 00:08:06.384 } 00:08:06.384 ]' 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.643 { 00:08:06.643 "nbd_device": "/dev/nbd0", 00:08:06.643 "bdev_name": "Nvme0n1" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "nbd_device": "/dev/nbd1", 00:08:06.643 "bdev_name": "Nvme1n1p1" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "nbd_device": "/dev/nbd10", 00:08:06.643 "bdev_name": "Nvme1n1p2" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "nbd_device": "/dev/nbd11", 00:08:06.643 "bdev_name": "Nvme2n1" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "nbd_device": "/dev/nbd12", 00:08:06.643 "bdev_name": "Nvme2n2" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "nbd_device": "/dev/nbd13", 00:08:06.643 "bdev_name": "Nvme2n3" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "nbd_device": "/dev/nbd14", 00:08:06.643 "bdev_name": "Nvme3n1" 00:08:06.643 } 00:08:06.643 ]' 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.643 /dev/nbd1 00:08:06.643 /dev/nbd10 00:08:06.643 /dev/nbd11 00:08:06.643 /dev/nbd12 00:08:06.643 /dev/nbd13 00:08:06.643 /dev/nbd14' 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:06.643 /dev/nbd1 00:08:06.643 /dev/nbd10 00:08:06.643 /dev/nbd11 00:08:06.643 /dev/nbd12 00:08:06.643 /dev/nbd13 00:08:06.643 /dev/nbd14' 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:06.643 256+0 records in 00:08:06.643 256+0 records out 00:08:06.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00713655 s, 147 MB/s 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:06.643 256+0 records in 00:08:06.643 256+0 records out 00:08:06.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.134627 s, 7.8 MB/s 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.643 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:06.901 256+0 records in 00:08:06.901 256+0 records out 00:08:06.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161233 s, 6.5 MB/s 00:08:06.901 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.901 13:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:06.901 256+0 records in 00:08:06.901 256+0 records out 00:08:06.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134215 s, 7.8 MB/s 00:08:06.901 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.901 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:07.159 256+0 records in 00:08:07.159 256+0 records out 00:08:07.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154835 s, 6.8 MB/s 00:08:07.159 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.159 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:07.418 256+0 records in 00:08:07.418 256+0 records out 00:08:07.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144994 s, 7.2 MB/s 00:08:07.418 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.418 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:07.418 256+0 records in 00:08:07.418 256+0 records out 00:08:07.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140514 s, 7.5 MB/s 00:08:07.418 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.418 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:07.676 256+0 records in 00:08:07.676 256+0 records out 00:08:07.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138523 s, 7.6 MB/s 00:08:07.676 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:07.676 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:07.676 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.676 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:07.676 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.676 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.677 13:03:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.935 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.502 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.760 13:04:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.018 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.583 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.841 13:04:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.099 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.358 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.358 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.358 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:10.616 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:10.874 malloc_lvol_verify 00:08:10.874 13:04:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:11.132 127f49ee-01bb-44e9-aff6-f7e016099727 00:08:11.132 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:11.397 43a9c838-84e8-43f6-a768-fa47f02afc04 00:08:11.397 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:11.655 /dev/nbd0 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:11.655 mke2fs 1.46.5 (30-Dec-2021) 00:08:11.655 Discarding device blocks: 0/4096 done 00:08:11.655 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:11.655 00:08:11.655 Allocating group tables: 0/1 done 00:08:11.655 Writing inode tables: 0/1 done 00:08:11.655 Creating journal (1024 blocks): done 00:08:11.655 Writing superblocks and filesystem accounting information: 0/1 done 00:08:11.655 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:11.655 13:04:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 66710 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 66710 ']' 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 66710 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66710 00:08:12.221 killing process with pid 66710 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.221 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.222 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66710' 00:08:12.222 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 66710 00:08:12.222 13:04:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 66710 00:08:13.596 ************************************ 00:08:13.596 END TEST bdev_nbd 00:08:13.596 ************************************ 00:08:13.596 13:04:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:13.596 00:08:13.596 real 0m15.886s 00:08:13.596 user 0m22.797s 00:08:13.596 sys 0m5.096s 00:08:13.596 13:04:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.596 13:04:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:13.596 13:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:13.596 13:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:13.597 13:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:13.597 skipping fio tests on NVMe due to multi-ns failures. 00:08:13.597 13:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:13.597 13:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:13.597 13:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:13.597 13:04:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:13.597 13:04:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.597 13:04:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:13.597 ************************************ 00:08:13.597 START TEST bdev_verify 00:08:13.597 ************************************ 00:08:13.597 13:04:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:13.597 [2024-07-25 13:04:05.512049] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:13.597 [2024-07-25 13:04:05.512236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67169 ] 00:08:13.597 [2024-07-25 13:04:05.675581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:13.854 [2024-07-25 13:04:05.864262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.854 [2024-07-25 13:04:05.864279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.421 Running I/O for 5 seconds... 
00:08:19.684 00:08:19.684 Latency(us) 00:08:19.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.684 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.684 Verification LBA range: start 0x0 length 0xbd0bd 00:08:19.685 Nvme0n1 : 5.07 1313.55 5.13 0.00 0.00 97201.44 20733.21 106287.48 00:08:19.685 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:19.685 Nvme0n1 : 5.07 1363.18 5.32 0.00 0.00 93707.60 16562.73 91035.46 00:08:19.685 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x0 length 0x4ff80 00:08:19.685 Nvme1n1p1 : 5.07 1313.01 5.13 0.00 0.00 97019.04 22758.87 103427.72 00:08:19.685 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:19.685 Nvme1n1p1 : 5.07 1362.46 5.32 0.00 0.00 93620.59 16324.42 87699.08 00:08:19.685 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x0 length 0x4ff7f 00:08:19.685 Nvme1n1p2 : 5.07 1312.55 5.13 0.00 0.00 96898.03 22639.71 98661.47 00:08:19.685 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:19.685 Nvme1n1p2 : 5.08 1361.67 5.32 0.00 0.00 93457.66 16801.05 85792.58 00:08:19.685 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x0 length 0x80000 00:08:19.685 Nvme2n1 : 5.07 1311.90 5.12 0.00 0.00 96752.36 23235.49 96278.34 00:08:19.685 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x80000 length 0x80000 00:08:19.685 Nvme2n1 : 5.08 1360.84 5.32 0.00 0.00 93323.96 18350.08 81026.33 00:08:19.685 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x0 length 0x80000 00:08:19.685 Nvme2n2 : 5.08 1311.17 5.12 0.00 0.00 96622.93 23831.27 96754.97 00:08:19.685 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x80000 length 0x80000 00:08:19.685 Nvme2n2 : 5.08 1360.02 5.31 0.00 0.00 93196.54 20018.27 84362.71 00:08:19.685 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x0 length 0x80000 00:08:19.685 Nvme2n3 : 5.08 1310.36 5.12 0.00 0.00 96490.76 19184.17 101997.85 00:08:19.685 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x80000 length 0x80000 00:08:19.685 Nvme2n3 : 5.08 1359.58 5.31 0.00 0.00 93055.16 19065.02 87699.08 00:08:19.685 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x0 length 0x20000 00:08:19.685 Nvme3n1 : 5.09 1321.01 5.16 0.00 0.00 95666.61 2263.97 106764.10 00:08:19.685 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.685 Verification LBA range: start 0x20000 length 0x20000 00:08:19.685 Nvme3n1 : 5.09 1359.17 5.31 0.00 0.00 92913.23 17396.83 91035.46 00:08:19.685 =================================================================================================================== 00:08:19.685 Total : 18720.47 73.13 0.00 0.00 94963.68 2263.97 106764.10 
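A quick cross-check on the table above: MiB/s is just IOPS x IO size / 2^20, so the Total row's 18720.47 IOPS at the 4096-byte IO size works out to 18720.47 x 4096 / 1048576 ≈ 73.13 MiB/s, matching the reported column (per-device rows check out the same way, e.g. 1313.55 x 4096 / 1048576 ≈ 5.13). The same relation ties the IOPS and MiB/s columns together in the 65536-byte big-IO table and the write_zeroes table further down.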
00:08:21.058 00:08:21.058 real 0m7.702s 00:08:21.058 user 0m14.074s 00:08:21.058 sys 0m0.242s 00:08:21.058 13:04:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.058 13:04:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:21.058 ************************************ 00:08:21.058 END TEST bdev_verify 00:08:21.058 ************************************ 00:08:21.058 13:04:13 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:21.058 13:04:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:21.058 13:04:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.058 13:04:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.058 ************************************ 00:08:21.058 START TEST bdev_verify_big_io 00:08:21.058 ************************************ 00:08:21.058 13:04:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:21.315 [2024-07-25 13:04:13.302703] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:21.315 [2024-07-25 13:04:13.302876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67273 ] 00:08:21.315 [2024-07-25 13:04:13.474543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:21.573 [2024-07-25 13:04:13.670295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.573 [2024-07-25 13:04:13.670302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.510 Running I/O for 5 seconds... 
00:08:29.065 00:08:29.065 Latency(us) 00:08:29.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.065 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x0 length 0xbd0b 00:08:29.065 Nvme0n1 : 5.69 106.86 6.68 0.00 0.00 1149211.68 23831.27 1296421.24 00:08:29.065 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:29.065 Nvme0n1 : 5.70 101.08 6.32 0.00 0.00 1217348.89 28716.68 1296421.24 00:08:29.065 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x0 length 0x4ff8 00:08:29.065 Nvme1n1p1 : 5.80 110.28 6.89 0.00 0.00 1084172.10 98661.47 1105771.05 00:08:29.065 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:29.065 Nvme1n1p1 : 5.94 102.65 6.42 0.00 0.00 1147804.34 109147.23 1113397.06 00:08:29.065 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x0 length 0x4ff7 00:08:29.065 Nvme1n1p2 : 5.92 112.75 7.05 0.00 0.00 1025947.01 108670.60 953250.91 00:08:29.065 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:29.065 Nvme1n1p2 : 5.94 103.30 6.46 0.00 0.00 1108103.72 109623.85 1006632.96 00:08:29.065 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x0 length 0x8000 00:08:29.065 Nvme2n1 : 5.92 112.88 7.05 0.00 0.00 990747.35 109623.85 983754.94 00:08:29.065 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x8000 length 0x8000 00:08:29.065 Nvme2n1 : 5.95 107.62 6.73 0.00 0.00 1045796.68 130595.37 1052389.00 00:08:29.065 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x0 length 0x8000 00:08:29.065 Nvme2n2 : 6.10 121.38 7.59 0.00 0.00 894486.56 58386.62 1021884.97 00:08:29.065 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x8000 length 0x8000 00:08:29.065 Nvme2n2 : 6.11 115.30 7.21 0.00 0.00 951343.90 52190.49 1075267.03 00:08:29.065 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x0 length 0x8000 00:08:29.065 Nvme2n3 : 6.11 125.72 7.86 0.00 0.00 841315.61 58148.31 1052389.00 00:08:29.065 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x8000 length 0x8000 00:08:29.065 Nvme2n3 : 6.15 120.38 7.52 0.00 0.00 884417.82 31218.97 1113397.06 00:08:29.065 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x0 length 0x2000 00:08:29.065 Nvme3n1 : 6.17 91.43 5.71 0.00 0.00 1125855.24 9175.04 2379314.27 00:08:29.065 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.065 Verification LBA range: start 0x2000 length 0x2000 00:08:29.065 Nvme3n1 : 6.16 87.88 5.49 0.00 0.00 1175617.83 4498.15 2562338.44 00:08:29.065 =================================================================================================================== 00:08:29.065 Total : 1519.51 94.97 0.00 0.00 
1034457.41 4498.15 2562338.44 00:08:30.440 00:08:30.440 real 0m9.279s 00:08:30.440 user 0m17.110s 00:08:30.440 sys 0m0.290s 00:08:30.440 13:04:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.440 13:04:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 ************************************ 00:08:30.440 END TEST bdev_verify_big_io 00:08:30.440 ************************************ 00:08:30.440 13:04:22 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:30.440 13:04:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:30.440 13:04:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.440 13:04:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 ************************************ 00:08:30.440 START TEST bdev_write_zeroes 00:08:30.440 ************************************ 00:08:30.440 13:04:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:30.440 [2024-07-25 13:04:22.586253] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:30.440 [2024-07-25 13:04:22.586403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67388 ] 00:08:30.699 [2024-07-25 13:04:22.754948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.957 [2024-07-25 13:04:23.021746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.523 Running I/O for 1 seconds... 
00:08:32.897 00:08:32.897 Latency(us) 00:08:32.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.897 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.897 Nvme0n1 : 1.03 6589.60 25.74 0.00 0.00 19337.98 14656.23 31457.28 00:08:32.897 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.897 Nvme1n1p1 : 1.02 6565.57 25.65 0.00 0.00 19368.11 14596.65 34317.03 00:08:32.897 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.897 Nvme1n1p2 : 1.03 6553.41 25.60 0.00 0.00 19349.96 14537.08 28597.53 00:08:32.897 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.897 Nvme2n1 : 1.03 6578.02 25.70 0.00 0.00 19233.21 13226.36 27644.28 00:08:32.897 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.897 Nvme2n2 : 1.03 6567.11 25.65 0.00 0.00 19223.40 12868.89 27882.59 00:08:32.897 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.897 Nvme2n3 : 1.03 6556.25 25.61 0.00 0.00 19216.05 12451.84 28240.06 00:08:32.897 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.897 Nvme3n1 : 1.04 6546.98 25.57 0.00 0.00 19172.71 10545.34 30980.65 00:08:32.897 =================================================================================================================== 00:08:32.897 Total : 45956.93 179.52 0.00 0.00 19271.39 10545.34 34317.03 00:08:33.831 00:08:33.831 real 0m3.368s 00:08:33.831 user 0m3.015s 00:08:33.831 sys 0m0.221s 00:08:33.831 13:04:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.831 13:04:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:33.831 ************************************ 00:08:33.831 END TEST bdev_write_zeroes 00:08:33.831 ************************************ 00:08:33.831 13:04:25 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.831 13:04:25 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:33.831 13:04:25 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.831 13:04:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.831 ************************************ 00:08:33.831 START TEST bdev_json_nonenclosed 00:08:33.831 ************************************ 00:08:33.831 13:04:25 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.831 [2024-07-25 13:04:26.007879] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:33.831 [2024-07-25 13:04:26.008034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67447 ] 00:08:34.089 [2024-07-25 13:04:26.172700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.359 [2024-07-25 13:04:26.360255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.359 [2024-07-25 13:04:26.360365] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:34.359 [2024-07-25 13:04:26.360397] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:34.359 [2024-07-25 13:04:26.360415] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.641 00:08:34.641 real 0m0.890s 00:08:34.641 user 0m0.657s 00:08:34.641 sys 0m0.126s 00:08:34.641 13:04:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.641 13:04:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:34.641 ************************************ 00:08:34.641 END TEST bdev_json_nonenclosed 00:08:34.641 ************************************ 00:08:34.900 13:04:26 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.900 13:04:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:34.900 13:04:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.900 13:04:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:34.900 ************************************ 00:08:34.900 START TEST bdev_json_nonarray 00:08:34.900 ************************************ 00:08:34.900 13:04:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.900 [2024-07-25 13:04:26.957501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:34.900 [2024-07-25 13:04:26.957672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67472 ] 00:08:35.158 [2024-07-25 13:04:27.132299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.416 [2024-07-25 13:04:27.358083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.416 [2024-07-25 13:04:27.358223] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
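The two negative tests traced here feed bdevperf deliberately malformed --json configs and expect exactly the errors logged above: nonenclosed.json drops the outer braces ("not enclosed in {}") and nonarray.json makes "subsystems" something other than an array. For reference, a minimal skeleton of the shape the config loader does accept (the inner bdev entry is illustrative, not taken from the test fixtures):

cat > /tmp/valid_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF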
00:08:35.416 [2024-07-25 13:04:27.358262] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:35.416 [2024-07-25 13:04:27.358281] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.674 00:08:35.674 real 0m0.949s 00:08:35.674 user 0m0.715s 00:08:35.674 sys 0m0.127s 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:35.675 ************************************ 00:08:35.675 END TEST bdev_json_nonarray 00:08:35.675 ************************************ 00:08:35.675 13:04:27 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:35.675 13:04:27 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:35.675 13:04:27 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:35.675 13:04:27 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.675 13:04:27 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.675 13:04:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.675 ************************************ 00:08:35.675 START TEST bdev_gpt_uuid 00:08:35.675 ************************************ 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67503 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67503 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67503 ']' 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.675 13:04:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:35.932 [2024-07-25 13:04:27.971394] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
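The bdev_gpt_uuid test starting here brings up a standalone spdk_tgt, loads the same bdev.json, and then looks up each GPT partition bdev by its unique partition GUID, checking that the alias and the driver_specific.gpt fields round-trip (see the two bdev_get_bdevs JSON dumps below). A hand-run equivalent of that lookup, using the GUID from the dump (rpc_cmd in the trace is the test wrapper around rpc.py; sketch only):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Query the first GPT partition bdev by its unique partition GUID and pull out
# the fields the test compares against it.
$rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 |
    jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'
# Both printed values should equal the GUID used for the lookup.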
00:08:35.932 [2024-07-25 13:04:27.971570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67503 ] 00:08:36.190 [2024-07-25 13:04:28.139581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.190 [2024-07-25 13:04:28.362249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.123 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.123 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:37.123 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:37.123 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.123 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:37.381 Some configs were skipped because the RPC state that can call them passed over. 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.381 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:37.381 { 00:08:37.381 "name": "Nvme1n1p1", 00:08:37.381 "aliases": [ 00:08:37.381 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:37.381 ], 00:08:37.381 "product_name": "GPT Disk", 00:08:37.381 "block_size": 4096, 00:08:37.381 "num_blocks": 655104, 00:08:37.381 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:37.381 "assigned_rate_limits": { 00:08:37.381 "rw_ios_per_sec": 0, 00:08:37.381 "rw_mbytes_per_sec": 0, 00:08:37.381 "r_mbytes_per_sec": 0, 00:08:37.381 "w_mbytes_per_sec": 0 00:08:37.381 }, 00:08:37.381 "claimed": false, 00:08:37.381 "zoned": false, 00:08:37.381 "supported_io_types": { 00:08:37.381 "read": true, 00:08:37.381 "write": true, 00:08:37.381 "unmap": true, 00:08:37.381 "flush": true, 00:08:37.381 "reset": true, 00:08:37.381 "nvme_admin": false, 00:08:37.381 "nvme_io": false, 00:08:37.381 "nvme_io_md": false, 00:08:37.381 "write_zeroes": true, 00:08:37.381 "zcopy": false, 00:08:37.381 "get_zone_info": false, 00:08:37.381 "zone_management": false, 00:08:37.381 "zone_append": false, 00:08:37.381 "compare": true, 00:08:37.381 "compare_and_write": false, 00:08:37.381 "abort": true, 00:08:37.381 "seek_hole": false, 00:08:37.381 "seek_data": false, 00:08:37.381 "copy": true, 00:08:37.381 "nvme_iov_md": false 00:08:37.382 }, 00:08:37.382 "driver_specific": { 
00:08:37.382 "gpt": { 00:08:37.382 "base_bdev": "Nvme1n1", 00:08:37.382 "offset_blocks": 256, 00:08:37.382 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:37.382 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:37.382 "partition_name": "SPDK_TEST_first" 00:08:37.382 } 00:08:37.382 } 00:08:37.382 } 00:08:37.382 ]' 00:08:37.382 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:37.382 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:37.382 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:37.382 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:37.382 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:37.639 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:37.639 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:37.639 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.639 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:37.639 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.639 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:37.639 { 00:08:37.639 "name": "Nvme1n1p2", 00:08:37.639 "aliases": [ 00:08:37.639 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:37.639 ], 00:08:37.639 "product_name": "GPT Disk", 00:08:37.639 "block_size": 4096, 00:08:37.639 "num_blocks": 655103, 00:08:37.639 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:37.639 "assigned_rate_limits": { 00:08:37.639 "rw_ios_per_sec": 0, 00:08:37.639 "rw_mbytes_per_sec": 0, 00:08:37.639 "r_mbytes_per_sec": 0, 00:08:37.639 "w_mbytes_per_sec": 0 00:08:37.639 }, 00:08:37.639 "claimed": false, 00:08:37.639 "zoned": false, 00:08:37.639 "supported_io_types": { 00:08:37.639 "read": true, 00:08:37.639 "write": true, 00:08:37.639 "unmap": true, 00:08:37.639 "flush": true, 00:08:37.639 "reset": true, 00:08:37.639 "nvme_admin": false, 00:08:37.639 "nvme_io": false, 00:08:37.639 "nvme_io_md": false, 00:08:37.639 "write_zeroes": true, 00:08:37.639 "zcopy": false, 00:08:37.639 "get_zone_info": false, 00:08:37.639 "zone_management": false, 00:08:37.639 "zone_append": false, 00:08:37.639 "compare": true, 00:08:37.639 "compare_and_write": false, 00:08:37.639 "abort": true, 00:08:37.639 "seek_hole": false, 00:08:37.639 "seek_data": false, 00:08:37.639 "copy": true, 00:08:37.639 "nvme_iov_md": false 00:08:37.639 }, 00:08:37.639 "driver_specific": { 00:08:37.639 "gpt": { 00:08:37.639 "base_bdev": "Nvme1n1", 00:08:37.639 "offset_blocks": 655360, 00:08:37.639 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:37.640 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:37.640 "partition_name": "SPDK_TEST_second" 00:08:37.640 } 00:08:37.640 } 00:08:37.640 } 00:08:37.640 ]' 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67503 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67503 ']' 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67503 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67503 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.640 killing process with pid 67503 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67503' 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67503 00:08:37.640 13:04:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67503 00:08:40.198 00:08:40.198 real 0m4.032s 00:08:40.198 user 0m4.441s 00:08:40.198 sys 0m0.436s 00:08:40.198 13:04:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.198 13:04:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.198 ************************************ 00:08:40.198 END TEST bdev_gpt_uuid 00:08:40.198 ************************************ 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:40.198 13:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:40.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.457 Waiting for block devices as requested 00:08:40.457 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:40.457 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:40.457 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:40.716 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:45.986 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:45.986 13:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:45.986 13:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:45.986 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:45.986 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:45.986 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:45.986 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:45.986 13:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:45.986 00:08:45.986 real 1m7.937s 00:08:45.986 user 1m28.644s 00:08:45.986 sys 0m10.144s 00:08:45.986 13:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.986 13:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:45.986 ************************************ 00:08:45.986 END TEST blockdev_nvme_gpt 00:08:45.986 ************************************ 00:08:45.986 13:04:38 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:45.986 13:04:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.986 13:04:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.986 13:04:38 -- common/autotest_common.sh@10 -- # set +x 00:08:45.986 ************************************ 00:08:45.986 START TEST nvme 00:08:45.986 ************************************ 00:08:45.986 13:04:38 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:45.986 * Looking for test storage... 00:08:45.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:45.986 13:04:38 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:46.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.130 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.130 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.130 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.130 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.406 13:04:39 nvme -- nvme/nvme.sh@79 -- # uname 00:08:47.406 13:04:39 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:47.406 13:04:39 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:47.406 13:04:39 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1071 -- # stubpid=68148 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:47.406 Waiting for stub to ready for secondary processes... 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
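(Sketch for context, not part of the captured output: the xtrace above shows autotest_common.sh waiting on the stub started with "-s 4096 -i 0 -m 0xE", which appears to pre-reserve hugepage memory so the per-test binaries can attach to the same DPDK shared-memory group via -i 0. Under the assumptions visible in this run, the readiness wait amounts to roughly the following; stubpid is normally captured from $!, 68148 is simply the value seen here.)
stubpid=68148
while [ ! -e /var/run/spdk_stub0 ]; do
    # if the stub died before exposing its marker, give up instead of spinning
    [ -e "/proc/$stubpid" ] || { echo "stub exited before creating /var/run/spdk_stub0" >&2; exit 1; }
    sleep 1s
done
echo done.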
00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68148 ]] 00:08:47.406 13:04:39 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:47.406 [2024-07-25 13:04:39.460729] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:47.407 [2024-07-25 13:04:39.460921] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:48.349 [2024-07-25 13:04:40.240101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:48.349 13:04:40 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:48.349 13:04:40 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68148 ]] 00:08:48.349 13:04:40 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:48.349 [2024-07-25 13:04:40.463036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.349 [2024-07-25 13:04:40.463200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.349 [2024-07-25 13:04:40.463647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.349 [2024-07-25 13:04:40.485219] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:48.349 [2024-07-25 13:04:40.485306] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:48.349 [2024-07-25 13:04:40.494241] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:48.349 [2024-07-25 13:04:40.494410] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:48.349 [2024-07-25 13:04:40.497436] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:48.349 [2024-07-25 13:04:40.497749] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:48.349 [2024-07-25 13:04:40.497881] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:48.349 [2024-07-25 13:04:40.500941] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:48.349 [2024-07-25 13:04:40.501242] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:48.349 [2024-07-25 13:04:40.501379] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:48.349 [2024-07-25 13:04:40.504467] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:48.349 [2024-07-25 13:04:40.504796] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:48.349 [2024-07-25 13:04:40.504908] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:48.349 [2024-07-25 13:04:40.504990] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:48.349 [2024-07-25 13:04:40.505204] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:49.283 13:04:41 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:49.283 done. 00:08:49.283 13:04:41 nvme -- common/autotest_common.sh@1078 -- # echo done. 
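(Sketch for context, not part of the captured output: the nvme_identify test that follows builds its controller list by piping the JSON from scripts/gen_nvme.sh through jq and then runs the identify example against each address. A stand-alone equivalent, assuming the repo layout from this run and the identify example's usual -i/-r flags, would look roughly like this.)
rootdir=/home/vagrant/spdk_repo/spdk
# collect the PCIe addresses (BDFs) of all emulated NVMe controllers
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
for bdf in "${bdfs[@]}"; do
    # -i 0 joins the stub's shared-memory group; -r narrows identify to one controller
    "$rootdir/build/bin/spdk_nvme_identify" -i 0 -r "trtype:PCIe traddr:${bdf}"
done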
00:08:49.283 13:04:41 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:49.283 13:04:41 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:49.283 13:04:41 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.283 13:04:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.283 ************************************ 00:08:49.283 START TEST nvme_reset 00:08:49.283 ************************************ 00:08:49.283 13:04:41 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:49.849 Initializing NVMe Controllers 00:08:49.849 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:49.849 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:49.849 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:49.849 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:49.849 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:49.849 00:08:49.849 real 0m0.329s 00:08:49.849 user 0m0.120s 00:08:49.849 sys 0m0.159s 00:08:49.849 13:04:41 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.849 13:04:41 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:49.849 ************************************ 00:08:49.849 END TEST nvme_reset 00:08:49.849 ************************************ 00:08:49.849 13:04:41 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:49.849 13:04:41 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.849 13:04:41 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.849 13:04:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.849 ************************************ 00:08:49.849 START TEST nvme_identify 00:08:49.849 ************************************ 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:49.849 13:04:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:49.849 13:04:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:49.849 13:04:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:49.849 13:04:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:08:49.849 13:04:41 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:49.849 13:04:41 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:50.110 [2024-07-25 13:04:42.063599] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68182 terminated unexpected 00:08:50.110 ===================================================== 00:08:50.110 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.110 
===================================================== 00:08:50.110 Controller Capabilities/Features 00:08:50.110 ================================ 00:08:50.110 Vendor ID: 1b36 00:08:50.110 Subsystem Vendor ID: 1af4 00:08:50.110 Serial Number: 12340 00:08:50.110 Model Number: QEMU NVMe Ctrl 00:08:50.110 Firmware Version: 8.0.0 00:08:50.110 Recommended Arb Burst: 6 00:08:50.110 IEEE OUI Identifier: 00 54 52 00:08:50.110 Multi-path I/O 00:08:50.110 May have multiple subsystem ports: No 00:08:50.110 May have multiple controllers: No 00:08:50.110 Associated with SR-IOV VF: No 00:08:50.110 Max Data Transfer Size: 524288 00:08:50.110 Max Number of Namespaces: 256 00:08:50.110 Max Number of I/O Queues: 64 00:08:50.110 NVMe Specification Version (VS): 1.4 00:08:50.110 NVMe Specification Version (Identify): 1.4 00:08:50.110 Maximum Queue Entries: 2048 00:08:50.110 Contiguous Queues Required: Yes 00:08:50.110 Arbitration Mechanisms Supported 00:08:50.110 Weighted Round Robin: Not Supported 00:08:50.110 Vendor Specific: Not Supported 00:08:50.110 Reset Timeout: 7500 ms 00:08:50.110 Doorbell Stride: 4 bytes 00:08:50.110 NVM Subsystem Reset: Not Supported 00:08:50.110 Command Sets Supported 00:08:50.110 NVM Command Set: Supported 00:08:50.110 Boot Partition: Not Supported 00:08:50.110 Memory Page Size Minimum: 4096 bytes 00:08:50.110 Memory Page Size Maximum: 65536 bytes 00:08:50.110 Persistent Memory Region: Not Supported 00:08:50.110 Optional Asynchronous Events Supported 00:08:50.110 Namespace Attribute Notices: Supported 00:08:50.110 Firmware Activation Notices: Not Supported 00:08:50.110 ANA Change Notices: Not Supported 00:08:50.110 PLE Aggregate Log Change Notices: Not Supported 00:08:50.110 LBA Status Info Alert Notices: Not Supported 00:08:50.110 EGE Aggregate Log Change Notices: Not Supported 00:08:50.110 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.110 Zone Descriptor Change Notices: Not Supported 00:08:50.110 Discovery Log Change Notices: Not Supported 00:08:50.110 Controller Attributes 00:08:50.110 128-bit Host Identifier: Not Supported 00:08:50.110 Non-Operational Permissive Mode: Not Supported 00:08:50.110 NVM Sets: Not Supported 00:08:50.110 Read Recovery Levels: Not Supported 00:08:50.110 Endurance Groups: Not Supported 00:08:50.110 Predictable Latency Mode: Not Supported 00:08:50.110 Traffic Based Keep ALive: Not Supported 00:08:50.110 Namespace Granularity: Not Supported 00:08:50.110 SQ Associations: Not Supported 00:08:50.110 UUID List: Not Supported 00:08:50.110 Multi-Domain Subsystem: Not Supported 00:08:50.110 Fixed Capacity Management: Not Supported 00:08:50.110 Variable Capacity Management: Not Supported 00:08:50.110 Delete Endurance Group: Not Supported 00:08:50.110 Delete NVM Set: Not Supported 00:08:50.110 Extended LBA Formats Supported: Supported 00:08:50.110 Flexible Data Placement Supported: Not Supported 00:08:50.110 00:08:50.110 Controller Memory Buffer Support 00:08:50.110 ================================ 00:08:50.110 Supported: No 00:08:50.110 00:08:50.110 Persistent Memory Region Support 00:08:50.110 ================================ 00:08:50.110 Supported: No 00:08:50.110 00:08:50.110 Admin Command Set Attributes 00:08:50.110 ============================ 00:08:50.110 Security Send/Receive: Not Supported 00:08:50.110 Format NVM: Supported 00:08:50.110 Firmware Activate/Download: Not Supported 00:08:50.110 Namespace Management: Supported 00:08:50.110 Device Self-Test: Not Supported 00:08:50.110 Directives: Supported 00:08:50.110 NVMe-MI: Not Supported 
00:08:50.110 Virtualization Management: Not Supported 00:08:50.110 Doorbell Buffer Config: Supported 00:08:50.110 Get LBA Status Capability: Not Supported 00:08:50.110 Command & Feature Lockdown Capability: Not Supported 00:08:50.110 Abort Command Limit: 4 00:08:50.110 Async Event Request Limit: 4 00:08:50.110 Number of Firmware Slots: N/A 00:08:50.110 Firmware Slot 1 Read-Only: N/A 00:08:50.110 Firmware Activation Without Reset: N/A 00:08:50.110 Multiple Update Detection Support: N/A 00:08:50.110 Firmware Update Granularity: No Information Provided 00:08:50.111 Per-Namespace SMART Log: Yes 00:08:50.111 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.111 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:50.111 Command Effects Log Page: Supported 00:08:50.111 Get Log Page Extended Data: Supported 00:08:50.111 Telemetry Log Pages: Not Supported 00:08:50.111 Persistent Event Log Pages: Not Supported 00:08:50.111 Supported Log Pages Log Page: May Support 00:08:50.111 Commands Supported & Effects Log Page: Not Supported 00:08:50.111 Feature Identifiers & Effects Log Page:May Support 00:08:50.111 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.111 Data Area 4 for Telemetry Log: Not Supported 00:08:50.111 Error Log Page Entries Supported: 1 00:08:50.111 Keep Alive: Not Supported 00:08:50.111 00:08:50.111 NVM Command Set Attributes 00:08:50.111 ========================== 00:08:50.111 Submission Queue Entry Size 00:08:50.111 Max: 64 00:08:50.111 Min: 64 00:08:50.111 Completion Queue Entry Size 00:08:50.111 Max: 16 00:08:50.111 Min: 16 00:08:50.111 Number of Namespaces: 256 00:08:50.111 Compare Command: Supported 00:08:50.111 Write Uncorrectable Command: Not Supported 00:08:50.111 Dataset Management Command: Supported 00:08:50.111 Write Zeroes Command: Supported 00:08:50.111 Set Features Save Field: Supported 00:08:50.111 Reservations: Not Supported 00:08:50.111 Timestamp: Supported 00:08:50.111 Copy: Supported 00:08:50.111 Volatile Write Cache: Present 00:08:50.111 Atomic Write Unit (Normal): 1 00:08:50.111 Atomic Write Unit (PFail): 1 00:08:50.111 Atomic Compare & Write Unit: 1 00:08:50.111 Fused Compare & Write: Not Supported 00:08:50.111 Scatter-Gather List 00:08:50.111 SGL Command Set: Supported 00:08:50.111 SGL Keyed: Not Supported 00:08:50.111 SGL Bit Bucket Descriptor: Not Supported 00:08:50.111 SGL Metadata Pointer: Not Supported 00:08:50.111 Oversized SGL: Not Supported 00:08:50.111 SGL Metadata Address: Not Supported 00:08:50.111 SGL Offset: Not Supported 00:08:50.111 Transport SGL Data Block: Not Supported 00:08:50.111 Replay Protected Memory Block: Not Supported 00:08:50.111 00:08:50.111 Firmware Slot Information 00:08:50.111 ========================= 00:08:50.111 Active slot: 1 00:08:50.111 Slot 1 Firmware Revision: 1.0 00:08:50.111 00:08:50.111 00:08:50.111 Commands Supported and Effects 00:08:50.111 ============================== 00:08:50.111 Admin Commands 00:08:50.111 -------------- 00:08:50.111 Delete I/O Submission Queue (00h): Supported 00:08:50.111 Create I/O Submission Queue (01h): Supported 00:08:50.111 Get Log Page (02h): Supported 00:08:50.111 Delete I/O Completion Queue (04h): Supported 00:08:50.111 Create I/O Completion Queue (05h): Supported 00:08:50.111 Identify (06h): Supported 00:08:50.111 Abort (08h): Supported 00:08:50.111 Set Features (09h): Supported 00:08:50.111 Get Features (0Ah): Supported 00:08:50.111 Asynchronous Event Request (0Ch): Supported 00:08:50.111 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.111 Directive 
Send (19h): Supported 00:08:50.111 Directive Receive (1Ah): Supported 00:08:50.111 Virtualization Management (1Ch): Supported 00:08:50.111 Doorbell Buffer Config (7Ch): Supported 00:08:50.111 Format NVM (80h): Supported LBA-Change 00:08:50.111 I/O Commands 00:08:50.111 ------------ 00:08:50.111 Flush (00h): Supported LBA-Change 00:08:50.111 Write (01h): Supported LBA-Change 00:08:50.111 Read (02h): Supported 00:08:50.111 Compare (05h): Supported 00:08:50.111 Write Zeroes (08h): Supported LBA-Change 00:08:50.111 Dataset Management (09h): Supported LBA-Change 00:08:50.111 Unknown (0Ch): Supported 00:08:50.111 Unknown (12h): Supported 00:08:50.111 Copy (19h): Supported LBA-Change 00:08:50.111 Unknown (1Dh): Supported LBA-Change 00:08:50.111 00:08:50.111 Error Log 00:08:50.111 ========= 00:08:50.111 00:08:50.111 Arbitration 00:08:50.111 =========== 00:08:50.111 Arbitration Burst: no limit 00:08:50.111 00:08:50.111 Power Management 00:08:50.111 ================ 00:08:50.111 Number of Power States: 1 00:08:50.111 Current Power State: Power State #0 00:08:50.111 Power State #0: 00:08:50.111 Max Power: 25.00 W 00:08:50.111 Non-Operational State: Operational 00:08:50.111 Entry Latency: 16 microseconds 00:08:50.111 Exit Latency: 4 microseconds 00:08:50.111 Relative Read Throughput: 0 00:08:50.111 Relative Read Latency: 0 00:08:50.111 Relative Write Throughput: 0 00:08:50.111 Relative Write Latency: 0 00:08:50.111 Idle Power[2024-07-25 13:04:42.064736] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68182 terminated unexpected 00:08:50.111 : Not Reported 00:08:50.111 Active Power: Not Reported 00:08:50.111 Non-Operational Permissive Mode: Not Supported 00:08:50.111 00:08:50.111 Health Information 00:08:50.111 ================== 00:08:50.111 Critical Warnings: 00:08:50.111 Available Spare Space: OK 00:08:50.111 Temperature: OK 00:08:50.111 Device Reliability: OK 00:08:50.111 Read Only: No 00:08:50.111 Volatile Memory Backup: OK 00:08:50.111 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.111 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.111 Available Spare: 0% 00:08:50.111 Available Spare Threshold: 0% 00:08:50.111 Life Percentage Used: 0% 00:08:50.111 Data Units Read: 666 00:08:50.111 Data Units Written: 557 00:08:50.111 Host Read Commands: 33042 00:08:50.111 Host Write Commands: 32080 00:08:50.111 Controller Busy Time: 0 minutes 00:08:50.111 Power Cycles: 0 00:08:50.111 Power On Hours: 0 hours 00:08:50.111 Unsafe Shutdowns: 0 00:08:50.111 Unrecoverable Media Errors: 0 00:08:50.111 Lifetime Error Log Entries: 0 00:08:50.111 Warning Temperature Time: 0 minutes 00:08:50.111 Critical Temperature Time: 0 minutes 00:08:50.111 00:08:50.111 Number of Queues 00:08:50.111 ================ 00:08:50.111 Number of I/O Submission Queues: 64 00:08:50.111 Number of I/O Completion Queues: 64 00:08:50.111 00:08:50.111 ZNS Specific Controller Data 00:08:50.111 ============================ 00:08:50.111 Zone Append Size Limit: 0 00:08:50.111 00:08:50.111 00:08:50.111 Active Namespaces 00:08:50.111 ================= 00:08:50.111 Namespace ID:1 00:08:50.111 Error Recovery Timeout: Unlimited 00:08:50.111 Command Set Identifier: NVM (00h) 00:08:50.111 Deallocate: Supported 00:08:50.111 Deallocated/Unwritten Error: Supported 00:08:50.111 Deallocated Read Value: All 0x00 00:08:50.111 Deallocate in Write Zeroes: Not Supported 00:08:50.111 Deallocated Guard Field: 0xFFFF 00:08:50.111 Flush: Supported 00:08:50.111 Reservation: Not Supported 00:08:50.111 Metadata Transferred as: 
Separate Metadata Buffer 00:08:50.111 Namespace Sharing Capabilities: Private 00:08:50.111 Size (in LBAs): 1548666 (5GiB) 00:08:50.111 Capacity (in LBAs): 1548666 (5GiB) 00:08:50.111 Utilization (in LBAs): 1548666 (5GiB) 00:08:50.111 Thin Provisioning: Not Supported 00:08:50.111 Per-NS Atomic Units: No 00:08:50.111 Maximum Single Source Range Length: 128 00:08:50.111 Maximum Copy Length: 128 00:08:50.111 Maximum Source Range Count: 128 00:08:50.111 NGUID/EUI64 Never Reused: No 00:08:50.111 Namespace Write Protected: No 00:08:50.111 Number of LBA Formats: 8 00:08:50.111 Current LBA Format: LBA Format #07 00:08:50.111 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.111 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.111 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.111 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.111 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.111 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.111 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.111 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.111 00:08:50.111 NVM Specific Namespace Data 00:08:50.111 =========================== 00:08:50.111 Logical Block Storage Tag Mask: 0 00:08:50.111 Protection Information Capabilities: 00:08:50.111 16b Guard Protection Information Storage Tag Support: No 00:08:50.111 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.111 Storage Tag Check Read Support: No 00:08:50.111 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.111 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.111 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.111 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.111 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.111 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.112 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.112 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.112 ===================================================== 00:08:50.112 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.112 ===================================================== 00:08:50.112 Controller Capabilities/Features 00:08:50.112 ================================ 00:08:50.112 Vendor ID: 1b36 00:08:50.112 Subsystem Vendor ID: 1af4 00:08:50.112 Serial Number: 12341 00:08:50.112 Model Number: QEMU NVMe Ctrl 00:08:50.112 Firmware Version: 8.0.0 00:08:50.112 Recommended Arb Burst: 6 00:08:50.112 IEEE OUI Identifier: 00 54 52 00:08:50.112 Multi-path I/O 00:08:50.112 May have multiple subsystem ports: No 00:08:50.112 May have multiple controllers: No 00:08:50.112 Associated with SR-IOV VF: No 00:08:50.112 Max Data Transfer Size: 524288 00:08:50.112 Max Number of Namespaces: 256 00:08:50.112 Max Number of I/O Queues: 64 00:08:50.112 NVMe Specification Version (VS): 1.4 00:08:50.112 NVMe Specification Version (Identify): 1.4 00:08:50.112 Maximum Queue Entries: 2048 00:08:50.112 Contiguous Queues Required: Yes 00:08:50.112 Arbitration Mechanisms Supported 00:08:50.112 Weighted Round Robin: Not Supported 00:08:50.112 Vendor Specific: Not Supported 00:08:50.112 Reset Timeout: 7500 ms 
00:08:50.112 Doorbell Stride: 4 bytes 00:08:50.112 NVM Subsystem Reset: Not Supported 00:08:50.112 Command Sets Supported 00:08:50.112 NVM Command Set: Supported 00:08:50.112 Boot Partition: Not Supported 00:08:50.112 Memory Page Size Minimum: 4096 bytes 00:08:50.112 Memory Page Size Maximum: 65536 bytes 00:08:50.112 Persistent Memory Region: Not Supported 00:08:50.112 Optional Asynchronous Events Supported 00:08:50.112 Namespace Attribute Notices: Supported 00:08:50.112 Firmware Activation Notices: Not Supported 00:08:50.112 ANA Change Notices: Not Supported 00:08:50.112 PLE Aggregate Log Change Notices: Not Supported 00:08:50.112 LBA Status Info Alert Notices: Not Supported 00:08:50.112 EGE Aggregate Log Change Notices: Not Supported 00:08:50.112 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.112 Zone Descriptor Change Notices: Not Supported 00:08:50.112 Discovery Log Change Notices: Not Supported 00:08:50.112 Controller Attributes 00:08:50.112 128-bit Host Identifier: Not Supported 00:08:50.112 Non-Operational Permissive Mode: Not Supported 00:08:50.112 NVM Sets: Not Supported 00:08:50.112 Read Recovery Levels: Not Supported 00:08:50.112 Endurance Groups: Not Supported 00:08:50.112 Predictable Latency Mode: Not Supported 00:08:50.112 Traffic Based Keep ALive: Not Supported 00:08:50.112 Namespace Granularity: Not Supported 00:08:50.112 SQ Associations: Not Supported 00:08:50.112 UUID List: Not Supported 00:08:50.112 Multi-Domain Subsystem: Not Supported 00:08:50.112 Fixed Capacity Management: Not Supported 00:08:50.112 Variable Capacity Management: Not Supported 00:08:50.112 Delete Endurance Group: Not Supported 00:08:50.112 Delete NVM Set: Not Supported 00:08:50.112 Extended LBA Formats Supported: Supported 00:08:50.112 Flexible Data Placement Supported: Not Supported 00:08:50.112 00:08:50.112 Controller Memory Buffer Support 00:08:50.112 ================================ 00:08:50.112 Supported: No 00:08:50.112 00:08:50.112 Persistent Memory Region Support 00:08:50.112 ================================ 00:08:50.112 Supported: No 00:08:50.112 00:08:50.112 Admin Command Set Attributes 00:08:50.112 ============================ 00:08:50.112 Security Send/Receive: Not Supported 00:08:50.112 Format NVM: Supported 00:08:50.112 Firmware Activate/Download: Not Supported 00:08:50.112 Namespace Management: Supported 00:08:50.112 Device Self-Test: Not Supported 00:08:50.112 Directives: Supported 00:08:50.112 NVMe-MI: Not Supported 00:08:50.112 Virtualization Management: Not Supported 00:08:50.112 Doorbell Buffer Config: Supported 00:08:50.112 Get LBA Status Capability: Not Supported 00:08:50.112 Command & Feature Lockdown Capability: Not Supported 00:08:50.112 Abort Command Limit: 4 00:08:50.112 Async Event Request Limit: 4 00:08:50.112 Number of Firmware Slots: N/A 00:08:50.112 Firmware Slot 1 Read-Only: N/A 00:08:50.112 Firmware Activation Without Reset: N/A 00:08:50.112 Multiple Update Detection Support: N/A 00:08:50.112 Firmware Update Granularity: No Information Provided 00:08:50.112 Per-Namespace SMART Log: Yes 00:08:50.112 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.112 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:50.112 Command Effects Log Page: Supported 00:08:50.112 Get Log Page Extended Data: Supported 00:08:50.112 Telemetry Log Pages: Not Supported 00:08:50.112 Persistent Event Log Pages: Not Supported 00:08:50.112 Supported Log Pages Log Page: May Support 00:08:50.112 Commands Supported & Effects Log Page: Not Supported 00:08:50.112 Feature Identifiers & 
Effects Log Page:May Support 00:08:50.112 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.112 Data Area 4 for Telemetry Log: Not Supported 00:08:50.112 Error Log Page Entries Supported: 1 00:08:50.112 Keep Alive: Not Supported 00:08:50.112 00:08:50.112 NVM Command Set Attributes 00:08:50.112 ========================== 00:08:50.112 Submission Queue Entry Size 00:08:50.112 Max: 64 00:08:50.112 Min: 64 00:08:50.112 Completion Queue Entry Size 00:08:50.112 Max: 16 00:08:50.112 Min: 16 00:08:50.112 Number of Namespaces: 256 00:08:50.112 Compare Command: Supported 00:08:50.112 Write Uncorrectable Command: Not Supported 00:08:50.112 Dataset Management Command: Supported 00:08:50.112 Write Zeroes Command: Supported 00:08:50.112 Set Features Save Field: Supported 00:08:50.112 Reservations: Not Supported 00:08:50.112 Timestamp: Supported 00:08:50.112 Copy: Supported 00:08:50.112 Volatile Write Cache: Present 00:08:50.112 Atomic Write Unit (Normal): 1 00:08:50.112 Atomic Write Unit (PFail): 1 00:08:50.112 Atomic Compare & Write Unit: 1 00:08:50.112 Fused Compare & Write: Not Supported 00:08:50.112 Scatter-Gather List 00:08:50.112 SGL Command Set: Supported 00:08:50.112 SGL Keyed: Not Supported 00:08:50.112 SGL Bit Bucket Descriptor: Not Supported 00:08:50.112 SGL Metadata Pointer: Not Supported 00:08:50.112 Oversized SGL: Not Supported 00:08:50.112 SGL Metadata Address: Not Supported 00:08:50.112 SGL Offset: Not Supported 00:08:50.112 Transport SGL Data Block: Not Supported 00:08:50.112 Replay Protected Memory Block: Not Supported 00:08:50.112 00:08:50.112 Firmware Slot Information 00:08:50.112 ========================= 00:08:50.112 Active slot: 1 00:08:50.112 Slot 1 Firmware Revision: 1.0 00:08:50.112 00:08:50.112 00:08:50.112 Commands Supported and Effects 00:08:50.112 ============================== 00:08:50.112 Admin Commands 00:08:50.112 -------------- 00:08:50.112 Delete I/O Submission Queue (00h): Supported 00:08:50.112 Create I/O Submission Queue (01h): Supported 00:08:50.112 Get Log Page (02h): Supported 00:08:50.112 Delete I/O Completion Queue (04h): Supported 00:08:50.112 Create I/O Completion Queue (05h): Supported 00:08:50.112 Identify (06h): Supported 00:08:50.112 Abort (08h): Supported 00:08:50.112 Set Features (09h): Supported 00:08:50.112 Get Features (0Ah): Supported 00:08:50.112 Asynchronous Event Request (0Ch): Supported 00:08:50.112 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.112 Directive Send (19h): Supported 00:08:50.112 Directive Receive (1Ah): Supported 00:08:50.112 Virtualization Management (1Ch): Supported 00:08:50.112 Doorbell Buffer Config (7Ch): Supported 00:08:50.112 Format NVM (80h): Supported LBA-Change 00:08:50.112 I/O Commands 00:08:50.112 ------------ 00:08:50.112 Flush (00h): Supported LBA-Change 00:08:50.112 Write (01h): Supported LBA-Change 00:08:50.112 Read (02h): Supported 00:08:50.112 Compare (05h): Supported 00:08:50.112 Write Zeroes (08h): Supported LBA-Change 00:08:50.112 Dataset Management (09h): Supported LBA-Change 00:08:50.112 Unknown (0Ch): Supported 00:08:50.112 Unknown (12h): Supported 00:08:50.112 Copy (19h): Supported LBA-Change 00:08:50.112 Unknown (1Dh): Supported LBA-Change 00:08:50.112 00:08:50.112 Error Log 00:08:50.112 ========= 00:08:50.112 00:08:50.112 Arbitration 00:08:50.112 =========== 00:08:50.112 Arbitration Burst: no limit 00:08:50.112 00:08:50.112 Power Management 00:08:50.112 ================ 00:08:50.112 Number of Power States: 1 00:08:50.112 Current Power State: Power State #0 00:08:50.112 Power 
State #0: 00:08:50.112 Max Power: 25.00 W 00:08:50.112 Non-Operational State: Operational 00:08:50.113 Entry Latency: 16 microseconds 00:08:50.113 Exit Latency: 4 microseconds 00:08:50.113 Relative Read Throughput: 0 00:08:50.113 Relative Read Latency: 0 00:08:50.113 Relative Write Throughput: 0 00:08:50.113 Relative Write Latency: 0 00:08:50.113 Idle Power: Not Reported 00:08:50.113 Active Power: Not Reported 00:08:50.113 Non-Operational Permissive Mode: Not Supported 00:08:50.113 00:08:50.113 Health Information 00:08:50.113 ================== 00:08:50.113 Critical Warnings: 00:08:50.113 Available Spare Space: OK 00:08:50.113 Temperature: [2024-07-25 13:04:42.065713] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68182 terminated unexpected 00:08:50.113 OK 00:08:50.113 Device Reliability: OK 00:08:50.113 Read Only: No 00:08:50.113 Volatile Memory Backup: OK 00:08:50.113 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.113 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.113 Available Spare: 0% 00:08:50.113 Available Spare Threshold: 0% 00:08:50.113 Life Percentage Used: 0% 00:08:50.113 Data Units Read: 1068 00:08:50.113 Data Units Written: 859 00:08:50.113 Host Read Commands: 50119 00:08:50.113 Host Write Commands: 47274 00:08:50.113 Controller Busy Time: 0 minutes 00:08:50.113 Power Cycles: 0 00:08:50.113 Power On Hours: 0 hours 00:08:50.113 Unsafe Shutdowns: 0 00:08:50.113 Unrecoverable Media Errors: 0 00:08:50.113 Lifetime Error Log Entries: 0 00:08:50.113 Warning Temperature Time: 0 minutes 00:08:50.113 Critical Temperature Time: 0 minutes 00:08:50.113 00:08:50.113 Number of Queues 00:08:50.113 ================ 00:08:50.113 Number of I/O Submission Queues: 64 00:08:50.113 Number of I/O Completion Queues: 64 00:08:50.113 00:08:50.113 ZNS Specific Controller Data 00:08:50.113 ============================ 00:08:50.113 Zone Append Size Limit: 0 00:08:50.113 00:08:50.113 00:08:50.113 Active Namespaces 00:08:50.113 ================= 00:08:50.113 Namespace ID:1 00:08:50.113 Error Recovery Timeout: Unlimited 00:08:50.113 Command Set Identifier: NVM (00h) 00:08:50.113 Deallocate: Supported 00:08:50.113 Deallocated/Unwritten Error: Supported 00:08:50.113 Deallocated Read Value: All 0x00 00:08:50.113 Deallocate in Write Zeroes: Not Supported 00:08:50.113 Deallocated Guard Field: 0xFFFF 00:08:50.113 Flush: Supported 00:08:50.113 Reservation: Not Supported 00:08:50.113 Namespace Sharing Capabilities: Private 00:08:50.113 Size (in LBAs): 1310720 (5GiB) 00:08:50.113 Capacity (in LBAs): 1310720 (5GiB) 00:08:50.113 Utilization (in LBAs): 1310720 (5GiB) 00:08:50.113 Thin Provisioning: Not Supported 00:08:50.113 Per-NS Atomic Units: No 00:08:50.113 Maximum Single Source Range Length: 128 00:08:50.113 Maximum Copy Length: 128 00:08:50.113 Maximum Source Range Count: 128 00:08:50.113 NGUID/EUI64 Never Reused: No 00:08:50.113 Namespace Write Protected: No 00:08:50.113 Number of LBA Formats: 8 00:08:50.113 Current LBA Format: LBA Format #04 00:08:50.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.113 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.113 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.113 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.113 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.113 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.113 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.113 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.113 00:08:50.113 NVM 
Specific Namespace Data 00:08:50.113 =========================== 00:08:50.113 Logical Block Storage Tag Mask: 0 00:08:50.113 Protection Information Capabilities: 00:08:50.113 16b Guard Protection Information Storage Tag Support: No 00:08:50.113 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.113 Storage Tag Check Read Support: No 00:08:50.113 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.113 ===================================================== 00:08:50.113 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.113 ===================================================== 00:08:50.113 Controller Capabilities/Features 00:08:50.113 ================================ 00:08:50.113 Vendor ID: 1b36 00:08:50.113 Subsystem Vendor ID: 1af4 00:08:50.113 Serial Number: 12343 00:08:50.113 Model Number: QEMU NVMe Ctrl 00:08:50.113 Firmware Version: 8.0.0 00:08:50.113 Recommended Arb Burst: 6 00:08:50.113 IEEE OUI Identifier: 00 54 52 00:08:50.113 Multi-path I/O 00:08:50.113 May have multiple subsystem ports: No 00:08:50.113 May have multiple controllers: Yes 00:08:50.113 Associated with SR-IOV VF: No 00:08:50.113 Max Data Transfer Size: 524288 00:08:50.113 Max Number of Namespaces: 256 00:08:50.113 Max Number of I/O Queues: 64 00:08:50.113 NVMe Specification Version (VS): 1.4 00:08:50.113 NVMe Specification Version (Identify): 1.4 00:08:50.113 Maximum Queue Entries: 2048 00:08:50.113 Contiguous Queues Required: Yes 00:08:50.113 Arbitration Mechanisms Supported 00:08:50.113 Weighted Round Robin: Not Supported 00:08:50.113 Vendor Specific: Not Supported 00:08:50.113 Reset Timeout: 7500 ms 00:08:50.113 Doorbell Stride: 4 bytes 00:08:50.113 NVM Subsystem Reset: Not Supported 00:08:50.113 Command Sets Supported 00:08:50.113 NVM Command Set: Supported 00:08:50.113 Boot Partition: Not Supported 00:08:50.113 Memory Page Size Minimum: 4096 bytes 00:08:50.113 Memory Page Size Maximum: 65536 bytes 00:08:50.113 Persistent Memory Region: Not Supported 00:08:50.113 Optional Asynchronous Events Supported 00:08:50.113 Namespace Attribute Notices: Supported 00:08:50.113 Firmware Activation Notices: Not Supported 00:08:50.113 ANA Change Notices: Not Supported 00:08:50.113 PLE Aggregate Log Change Notices: Not Supported 00:08:50.113 LBA Status Info Alert Notices: Not Supported 00:08:50.113 EGE Aggregate Log Change Notices: Not Supported 00:08:50.113 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.113 Zone Descriptor Change Notices: Not Supported 00:08:50.113 Discovery Log Change Notices: Not Supported 00:08:50.113 Controller Attributes 00:08:50.113 128-bit Host Identifier: Not Supported 00:08:50.113 Non-Operational Permissive Mode: Not Supported 00:08:50.113 NVM Sets: Not Supported 00:08:50.113 Read Recovery 
Levels: Not Supported 00:08:50.113 Endurance Groups: Supported 00:08:50.113 Predictable Latency Mode: Not Supported 00:08:50.113 Traffic Based Keep ALive: Not Supported 00:08:50.113 Namespace Granularity: Not Supported 00:08:50.113 SQ Associations: Not Supported 00:08:50.113 UUID List: Not Supported 00:08:50.113 Multi-Domain Subsystem: Not Supported 00:08:50.113 Fixed Capacity Management: Not Supported 00:08:50.113 Variable Capacity Management: Not Supported 00:08:50.113 Delete Endurance Group: Not Supported 00:08:50.113 Delete NVM Set: Not Supported 00:08:50.113 Extended LBA Formats Supported: Supported 00:08:50.113 Flexible Data Placement Supported: Supported 00:08:50.113 00:08:50.113 Controller Memory Buffer Support 00:08:50.113 ================================ 00:08:50.113 Supported: No 00:08:50.113 00:08:50.113 Persistent Memory Region Support 00:08:50.113 ================================ 00:08:50.113 Supported: No 00:08:50.113 00:08:50.113 Admin Command Set Attributes 00:08:50.113 ============================ 00:08:50.113 Security Send/Receive: Not Supported 00:08:50.113 Format NVM: Supported 00:08:50.113 Firmware Activate/Download: Not Supported 00:08:50.113 Namespace Management: Supported 00:08:50.113 Device Self-Test: Not Supported 00:08:50.113 Directives: Supported 00:08:50.113 NVMe-MI: Not Supported 00:08:50.113 Virtualization Management: Not Supported 00:08:50.113 Doorbell Buffer Config: Supported 00:08:50.113 Get LBA Status Capability: Not Supported 00:08:50.114 Command & Feature Lockdown Capability: Not Supported 00:08:50.114 Abort Command Limit: 4 00:08:50.114 Async Event Request Limit: 4 00:08:50.114 Number of Firmware Slots: N/A 00:08:50.114 Firmware Slot 1 Read-Only: N/A 00:08:50.114 Firmware Activation Without Reset: N/A 00:08:50.114 Multiple Update Detection Support: N/A 00:08:50.114 Firmware Update Granularity: No Information Provided 00:08:50.114 Per-Namespace SMART Log: Yes 00:08:50.114 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.114 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:50.114 Command Effects Log Page: Supported 00:08:50.114 Get Log Page Extended Data: Supported 00:08:50.114 Telemetry Log Pages: Not Supported 00:08:50.114 Persistent Event Log Pages: Not Supported 00:08:50.114 Supported Log Pages Log Page: May Support 00:08:50.114 Commands Supported & Effects Log Page: Not Supported 00:08:50.114 Feature Identifiers & Effects Log Page:May Support 00:08:50.114 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.114 Data Area 4 for Telemetry Log: Not Supported 00:08:50.114 Error Log Page Entries Supported: 1 00:08:50.114 Keep Alive: Not Supported 00:08:50.114 00:08:50.114 NVM Command Set Attributes 00:08:50.114 ========================== 00:08:50.114 Submission Queue Entry Size 00:08:50.114 Max: 64 00:08:50.114 Min: 64 00:08:50.114 Completion Queue Entry Size 00:08:50.114 Max: 16 00:08:50.114 Min: 16 00:08:50.114 Number of Namespaces: 256 00:08:50.114 Compare Command: Supported 00:08:50.114 Write Uncorrectable Command: Not Supported 00:08:50.114 Dataset Management Command: Supported 00:08:50.114 Write Zeroes Command: Supported 00:08:50.114 Set Features Save Field: Supported 00:08:50.114 Reservations: Not Supported 00:08:50.114 Timestamp: Supported 00:08:50.114 Copy: Supported 00:08:50.114 Volatile Write Cache: Present 00:08:50.114 Atomic Write Unit (Normal): 1 00:08:50.114 Atomic Write Unit (PFail): 1 00:08:50.114 Atomic Compare & Write Unit: 1 00:08:50.114 Fused Compare & Write: Not Supported 00:08:50.114 Scatter-Gather List 
00:08:50.114 SGL Command Set: Supported 00:08:50.114 SGL Keyed: Not Supported 00:08:50.114 SGL Bit Bucket Descriptor: Not Supported 00:08:50.114 SGL Metadata Pointer: Not Supported 00:08:50.114 Oversized SGL: Not Supported 00:08:50.114 SGL Metadata Address: Not Supported 00:08:50.114 SGL Offset: Not Supported 00:08:50.114 Transport SGL Data Block: Not Supported 00:08:50.114 Replay Protected Memory Block: Not Supported 00:08:50.114 00:08:50.114 Firmware Slot Information 00:08:50.114 ========================= 00:08:50.114 Active slot: 1 00:08:50.114 Slot 1 Firmware Revision: 1.0 00:08:50.114 00:08:50.114 00:08:50.114 Commands Supported and Effects 00:08:50.114 ============================== 00:08:50.114 Admin Commands 00:08:50.114 -------------- 00:08:50.114 Delete I/O Submission Queue (00h): Supported 00:08:50.114 Create I/O Submission Queue (01h): Supported 00:08:50.114 Get Log Page (02h): Supported 00:08:50.114 Delete I/O Completion Queue (04h): Supported 00:08:50.114 Create I/O Completion Queue (05h): Supported 00:08:50.114 Identify (06h): Supported 00:08:50.114 Abort (08h): Supported 00:08:50.114 Set Features (09h): Supported 00:08:50.114 Get Features (0Ah): Supported 00:08:50.114 Asynchronous Event Request (0Ch): Supported 00:08:50.114 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.114 Directive Send (19h): Supported 00:08:50.114 Directive Receive (1Ah): Supported 00:08:50.114 Virtualization Management (1Ch): Supported 00:08:50.114 Doorbell Buffer Config (7Ch): Supported 00:08:50.114 Format NVM (80h): Supported LBA-Change 00:08:50.114 I/O Commands 00:08:50.114 ------------ 00:08:50.114 Flush (00h): Supported LBA-Change 00:08:50.114 Write (01h): Supported LBA-Change 00:08:50.114 Read (02h): Supported 00:08:50.114 Compare (05h): Supported 00:08:50.114 Write Zeroes (08h): Supported LBA-Change 00:08:50.114 Dataset Management (09h): Supported LBA-Change 00:08:50.114 Unknown (0Ch): Supported 00:08:50.114 Unknown (12h): Supported 00:08:50.114 Copy (19h): Supported LBA-Change 00:08:50.114 Unknown (1Dh): Supported LBA-Change 00:08:50.114 00:08:50.114 Error Log 00:08:50.114 ========= 00:08:50.114 00:08:50.114 Arbitration 00:08:50.114 =========== 00:08:50.114 Arbitration Burst: no limit 00:08:50.114 00:08:50.114 Power Management 00:08:50.114 ================ 00:08:50.114 Number of Power States: 1 00:08:50.114 Current Power State: Power State #0 00:08:50.114 Power State #0: 00:08:50.114 Max Power: 25.00 W 00:08:50.114 Non-Operational State: Operational 00:08:50.114 Entry Latency: 16 microseconds 00:08:50.114 Exit Latency: 4 microseconds 00:08:50.114 Relative Read Throughput: 0 00:08:50.114 Relative Read Latency: 0 00:08:50.114 Relative Write Throughput: 0 00:08:50.114 Relative Write Latency: 0 00:08:50.114 Idle Power: Not Reported 00:08:50.114 Active Power: Not Reported 00:08:50.114 Non-Operational Permissive Mode: Not Supported 00:08:50.114 00:08:50.114 Health Information 00:08:50.114 ================== 00:08:50.114 Critical Warnings: 00:08:50.114 Available Spare Space: OK 00:08:50.114 Temperature: OK 00:08:50.114 Device Reliability: OK 00:08:50.114 Read Only: No 00:08:50.114 Volatile Memory Backup: OK 00:08:50.114 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.114 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.114 Available Spare: 0% 00:08:50.114 Available Spare Threshold: 0% 00:08:50.114 Life Percentage Used: 0% 00:08:50.114 Data Units Read: 729 00:08:50.114 Data Units Written: 622 00:08:50.114 Host Read Commands: 33925 00:08:50.114 Host Write Commands: 32515 
00:08:50.114 Controller Busy Time: 0 minutes 00:08:50.114 Power Cycles: 0 00:08:50.114 Power On Hours: 0 hours 00:08:50.114 Unsafe Shutdowns: 0 00:08:50.114 Unrecoverable Media Errors: 0 00:08:50.114 Lifetime Error Log Entries: 0 00:08:50.114 Warning Temperature Time: 0 minutes 00:08:50.114 Critical Temperature Time: 0 minutes 00:08:50.114 00:08:50.114 Number of Queues 00:08:50.114 ================ 00:08:50.114 Number of I/O Submission Queues: 64 00:08:50.114 Number of I/O Completion Queues: 64 00:08:50.114 00:08:50.114 ZNS Specific Controller Data 00:08:50.114 ============================ 00:08:50.114 Zone Append Size Limit: 0 00:08:50.114 00:08:50.114 00:08:50.114 Active Namespaces 00:08:50.114 ================= 00:08:50.114 Namespace ID:1 00:08:50.114 Error Recovery Timeout: Unlimited 00:08:50.114 Command Set Identifier: NVM (00h) 00:08:50.114 Deallocate: Supported 00:08:50.114 Deallocated/Unwritten Error: Supported 00:08:50.114 Deallocated Read Value: All 0x00 00:08:50.114 Deallocate in Write Zeroes: Not Supported 00:08:50.114 Deallocated Guard Field: 0xFFFF 00:08:50.114 Flush: Supported 00:08:50.114 Reservation: Not Supported 00:08:50.114 Namespace Sharing Capabilities: Multiple Controllers 00:08:50.114 Size (in LBAs): 262144 (1GiB) 00:08:50.114 Capacity (in LBAs): 262144 (1GiB) 00:08:50.114 Utilization (in LBAs): 262144 (1GiB) 00:08:50.114 Thin Provisioning: Not Supported 00:08:50.114 Per-NS Atomic Units: No 00:08:50.114 Maximum Single Source Range Length: 128 00:08:50.114 Maximum Copy Length: 128 00:08:50.114 Maximum Source Range Count: 128 00:08:50.114 NGUID/EUI64 Never Reused: No 00:08:50.114 Namespace Write Protected: No 00:08:50.114 Endurance group ID: 1 00:08:50.114 Number of LBA Formats: 8 00:08:50.114 Current LBA Format: LBA Format #04 00:08:50.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.114 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.114 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.114 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.114 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.114 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.114 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.114 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.114 00:08:50.114 Get Feature FDP: 00:08:50.114 ================ 00:08:50.114 Enabled: Yes 00:08:50.114 FDP configuration index: 0 00:08:50.114 00:08:50.114 FDP configurations log page 00:08:50.114 =========================== 00:08:50.115 Number of FDP configurations: 1 00:08:50.115 Version: 0 00:08:50.115 Size: 112 00:08:50.115 FDP Configuration Descriptor: 0 00:08:50.115 Descriptor Size: 96 00:08:50.115 Reclaim Group Identifier format: 2 00:08:50.115 FDP Volatile Write Cache: Not Present 00:08:50.115 FDP Configuration: Valid 00:08:50.115 Vendor Specific Size: 0 00:08:50.115 Number of Reclaim Groups: 2 00:08:50.115 Number of Recalim Unit Handles: 8 00:08:50.115 Max Placement Identifiers: 128 00:08:50.115 Number of Namespaces Suppprted: 256 00:08:50.115 Reclaim unit Nominal Size: 6000000 bytes 00:08:50.115 Estimated Reclaim Unit Time Limit: Not Reported 00:08:50.115 RUH Desc #000: RUH Type: Initially Isolated 00:08:50.115 RUH Desc #001: RUH Type: Initially Isolated 00:08:50.115 RUH Desc #002: RUH Type: Initially Isolated 00:08:50.115 RUH Desc #003: RUH Type: Initially Isolated 00:08:50.115 RUH Desc #004: RUH Type: Initially Isolated 00:08:50.115 RUH Desc #005: RUH Type: Initially Isolated 00:08:50.115 RUH Desc #006: RUH Type: Initially Isolated 
00:08:50.115 RUH Desc #007: RUH Type: Initially Isolated 00:08:50.115 00:08:50.115 FDP reclaim unit handle usage log page 00:08:50.115 ====================================== 00:08:50.115 Number of Reclaim Unit Handles: 8 00:08:50.115 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:50.115 RUH Usage Desc #001: RUH Attributes: Unused 00:08:50.115 RUH Usage Desc #002: RUH Attributes: Unused 00:08:50.115 RUH Usage Desc #003: RUH Attributes: Unused 00:08:50.115 RUH Usage Desc #004: RUH Attributes: Unused 00:08:50.115 RUH Usage Desc #005: RUH Attributes: Unused 00:08:50.115 RUH Usage Desc #006: RUH Attributes: Unused 00:08:50.115 RUH Usage Desc #007: RUH Attributes: Unused 00:08:50.115 00:08:50.115 FDP statistics log page 00:08:50.115 ======================= 00:08:50.115 Host bytes with metadata written: 383033344 00:08:50.115 Medi[2024-07-25 13:04:42.067459] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68182 terminated unexpected 00:08:50.115 a bytes with metadata written: 383074304 00:08:50.115 Media bytes erased: 0 00:08:50.115 00:08:50.115 FDP events log page 00:08:50.115 =================== 00:08:50.115 Number of FDP events: 0 00:08:50.115 00:08:50.115 NVM Specific Namespace Data 00:08:50.115 =========================== 00:08:50.115 Logical Block Storage Tag Mask: 0 00:08:50.115 Protection Information Capabilities: 00:08:50.115 16b Guard Protection Information Storage Tag Support: No 00:08:50.115 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.115 Storage Tag Check Read Support: No 00:08:50.115 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.115 ===================================================== 00:08:50.115 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.115 ===================================================== 00:08:50.115 Controller Capabilities/Features 00:08:50.115 ================================ 00:08:50.115 Vendor ID: 1b36 00:08:50.115 Subsystem Vendor ID: 1af4 00:08:50.115 Serial Number: 12342 00:08:50.115 Model Number: QEMU NVMe Ctrl 00:08:50.115 Firmware Version: 8.0.0 00:08:50.115 Recommended Arb Burst: 6 00:08:50.115 IEEE OUI Identifier: 00 54 52 00:08:50.115 Multi-path I/O 00:08:50.115 May have multiple subsystem ports: No 00:08:50.115 May have multiple controllers: No 00:08:50.115 Associated with SR-IOV VF: No 00:08:50.115 Max Data Transfer Size: 524288 00:08:50.115 Max Number of Namespaces: 256 00:08:50.115 Max Number of I/O Queues: 64 00:08:50.115 NVMe Specification Version (VS): 1.4 00:08:50.115 NVMe Specification Version (Identify): 1.4 00:08:50.115 Maximum Queue Entries: 2048 00:08:50.115 Contiguous Queues Required: Yes 00:08:50.115 Arbitration Mechanisms Supported 00:08:50.115 Weighted Round Robin: Not 
Supported 00:08:50.115 Vendor Specific: Not Supported 00:08:50.115 Reset Timeout: 7500 ms 00:08:50.115 Doorbell Stride: 4 bytes 00:08:50.115 NVM Subsystem Reset: Not Supported 00:08:50.115 Command Sets Supported 00:08:50.115 NVM Command Set: Supported 00:08:50.115 Boot Partition: Not Supported 00:08:50.115 Memory Page Size Minimum: 4096 bytes 00:08:50.115 Memory Page Size Maximum: 65536 bytes 00:08:50.115 Persistent Memory Region: Not Supported 00:08:50.115 Optional Asynchronous Events Supported 00:08:50.115 Namespace Attribute Notices: Supported 00:08:50.115 Firmware Activation Notices: Not Supported 00:08:50.115 ANA Change Notices: Not Supported 00:08:50.115 PLE Aggregate Log Change Notices: Not Supported 00:08:50.115 LBA Status Info Alert Notices: Not Supported 00:08:50.115 EGE Aggregate Log Change Notices: Not Supported 00:08:50.115 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.115 Zone Descriptor Change Notices: Not Supported 00:08:50.115 Discovery Log Change Notices: Not Supported 00:08:50.115 Controller Attributes 00:08:50.115 128-bit Host Identifier: Not Supported 00:08:50.115 Non-Operational Permissive Mode: Not Supported 00:08:50.115 NVM Sets: Not Supported 00:08:50.115 Read Recovery Levels: Not Supported 00:08:50.115 Endurance Groups: Not Supported 00:08:50.115 Predictable Latency Mode: Not Supported 00:08:50.115 Traffic Based Keep ALive: Not Supported 00:08:50.115 Namespace Granularity: Not Supported 00:08:50.115 SQ Associations: Not Supported 00:08:50.115 UUID List: Not Supported 00:08:50.115 Multi-Domain Subsystem: Not Supported 00:08:50.115 Fixed Capacity Management: Not Supported 00:08:50.115 Variable Capacity Management: Not Supported 00:08:50.115 Delete Endurance Group: Not Supported 00:08:50.115 Delete NVM Set: Not Supported 00:08:50.115 Extended LBA Formats Supported: Supported 00:08:50.115 Flexible Data Placement Supported: Not Supported 00:08:50.115 00:08:50.115 Controller Memory Buffer Support 00:08:50.115 ================================ 00:08:50.115 Supported: No 00:08:50.115 00:08:50.115 Persistent Memory Region Support 00:08:50.115 ================================ 00:08:50.115 Supported: No 00:08:50.115 00:08:50.115 Admin Command Set Attributes 00:08:50.115 ============================ 00:08:50.115 Security Send/Receive: Not Supported 00:08:50.115 Format NVM: Supported 00:08:50.115 Firmware Activate/Download: Not Supported 00:08:50.115 Namespace Management: Supported 00:08:50.115 Device Self-Test: Not Supported 00:08:50.115 Directives: Supported 00:08:50.115 NVMe-MI: Not Supported 00:08:50.115 Virtualization Management: Not Supported 00:08:50.115 Doorbell Buffer Config: Supported 00:08:50.115 Get LBA Status Capability: Not Supported 00:08:50.115 Command & Feature Lockdown Capability: Not Supported 00:08:50.116 Abort Command Limit: 4 00:08:50.116 Async Event Request Limit: 4 00:08:50.116 Number of Firmware Slots: N/A 00:08:50.116 Firmware Slot 1 Read-Only: N/A 00:08:50.116 Firmware Activation Without Reset: N/A 00:08:50.116 Multiple Update Detection Support: N/A 00:08:50.116 Firmware Update Granularity: No Information Provided 00:08:50.116 Per-Namespace SMART Log: Yes 00:08:50.116 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.116 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:50.116 Command Effects Log Page: Supported 00:08:50.116 Get Log Page Extended Data: Supported 00:08:50.116 Telemetry Log Pages: Not Supported 00:08:50.116 Persistent Event Log Pages: Not Supported 00:08:50.116 Supported Log Pages Log Page: May Support 
00:08:50.116 Commands Supported & Effects Log Page: Not Supported 00:08:50.116 Feature Identifiers & Effects Log Page:May Support 00:08:50.116 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.116 Data Area 4 for Telemetry Log: Not Supported 00:08:50.116 Error Log Page Entries Supported: 1 00:08:50.116 Keep Alive: Not Supported 00:08:50.116 00:08:50.116 NVM Command Set Attributes 00:08:50.116 ========================== 00:08:50.116 Submission Queue Entry Size 00:08:50.116 Max: 64 00:08:50.116 Min: 64 00:08:50.116 Completion Queue Entry Size 00:08:50.116 Max: 16 00:08:50.116 Min: 16 00:08:50.116 Number of Namespaces: 256 00:08:50.116 Compare Command: Supported 00:08:50.116 Write Uncorrectable Command: Not Supported 00:08:50.116 Dataset Management Command: Supported 00:08:50.116 Write Zeroes Command: Supported 00:08:50.116 Set Features Save Field: Supported 00:08:50.116 Reservations: Not Supported 00:08:50.116 Timestamp: Supported 00:08:50.116 Copy: Supported 00:08:50.116 Volatile Write Cache: Present 00:08:50.116 Atomic Write Unit (Normal): 1 00:08:50.116 Atomic Write Unit (PFail): 1 00:08:50.116 Atomic Compare & Write Unit: 1 00:08:50.116 Fused Compare & Write: Not Supported 00:08:50.116 Scatter-Gather List 00:08:50.116 SGL Command Set: Supported 00:08:50.116 SGL Keyed: Not Supported 00:08:50.116 SGL Bit Bucket Descriptor: Not Supported 00:08:50.116 SGL Metadata Pointer: Not Supported 00:08:50.116 Oversized SGL: Not Supported 00:08:50.116 SGL Metadata Address: Not Supported 00:08:50.116 SGL Offset: Not Supported 00:08:50.116 Transport SGL Data Block: Not Supported 00:08:50.116 Replay Protected Memory Block: Not Supported 00:08:50.116 00:08:50.116 Firmware Slot Information 00:08:50.116 ========================= 00:08:50.116 Active slot: 1 00:08:50.116 Slot 1 Firmware Revision: 1.0 00:08:50.116 00:08:50.116 00:08:50.116 Commands Supported and Effects 00:08:50.116 ============================== 00:08:50.116 Admin Commands 00:08:50.116 -------------- 00:08:50.116 Delete I/O Submission Queue (00h): Supported 00:08:50.116 Create I/O Submission Queue (01h): Supported 00:08:50.116 Get Log Page (02h): Supported 00:08:50.116 Delete I/O Completion Queue (04h): Supported 00:08:50.116 Create I/O Completion Queue (05h): Supported 00:08:50.116 Identify (06h): Supported 00:08:50.116 Abort (08h): Supported 00:08:50.116 Set Features (09h): Supported 00:08:50.116 Get Features (0Ah): Supported 00:08:50.116 Asynchronous Event Request (0Ch): Supported 00:08:50.116 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.116 Directive Send (19h): Supported 00:08:50.116 Directive Receive (1Ah): Supported 00:08:50.116 Virtualization Management (1Ch): Supported 00:08:50.116 Doorbell Buffer Config (7Ch): Supported 00:08:50.116 Format NVM (80h): Supported LBA-Change 00:08:50.116 I/O Commands 00:08:50.116 ------------ 00:08:50.116 Flush (00h): Supported LBA-Change 00:08:50.116 Write (01h): Supported LBA-Change 00:08:50.116 Read (02h): Supported 00:08:50.116 Compare (05h): Supported 00:08:50.116 Write Zeroes (08h): Supported LBA-Change 00:08:50.116 Dataset Management (09h): Supported LBA-Change 00:08:50.116 Unknown (0Ch): Supported 00:08:50.116 Unknown (12h): Supported 00:08:50.116 Copy (19h): Supported LBA-Change 00:08:50.116 Unknown (1Dh): Supported LBA-Change 00:08:50.116 00:08:50.116 Error Log 00:08:50.116 ========= 00:08:50.116 00:08:50.116 Arbitration 00:08:50.116 =========== 00:08:50.116 Arbitration Burst: no limit 00:08:50.116 00:08:50.116 Power Management 00:08:50.116 ================ 
00:08:50.116 Number of Power States: 1 00:08:50.116 Current Power State: Power State #0 00:08:50.116 Power State #0: 00:08:50.116 Max Power: 25.00 W 00:08:50.116 Non-Operational State: Operational 00:08:50.116 Entry Latency: 16 microseconds 00:08:50.116 Exit Latency: 4 microseconds 00:08:50.116 Relative Read Throughput: 0 00:08:50.116 Relative Read Latency: 0 00:08:50.116 Relative Write Throughput: 0 00:08:50.116 Relative Write Latency: 0 00:08:50.116 Idle Power: Not Reported 00:08:50.116 Active Power: Not Reported 00:08:50.116 Non-Operational Permissive Mode: Not Supported 00:08:50.116 00:08:50.116 Health Information 00:08:50.116 ================== 00:08:50.116 Critical Warnings: 00:08:50.116 Available Spare Space: OK 00:08:50.116 Temperature: OK 00:08:50.116 Device Reliability: OK 00:08:50.116 Read Only: No 00:08:50.116 Volatile Memory Backup: OK 00:08:50.116 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.116 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.116 Available Spare: 0% 00:08:50.116 Available Spare Threshold: 0% 00:08:50.116 Life Percentage Used: 0% 00:08:50.116 Data Units Read: 2165 00:08:50.116 Data Units Written: 1845 00:08:50.116 Host Read Commands: 101376 00:08:50.116 Host Write Commands: 97146 00:08:50.116 Controller Busy Time: 0 minutes 00:08:50.116 Power Cycles: 0 00:08:50.116 Power On Hours: 0 hours 00:08:50.116 Unsafe Shutdowns: 0 00:08:50.116 Unrecoverable Media Errors: 0 00:08:50.116 Lifetime Error Log Entries: 0 00:08:50.116 Warning Temperature Time: 0 minutes 00:08:50.116 Critical Temperature Time: 0 minutes 00:08:50.116 00:08:50.116 Number of Queues 00:08:50.116 ================ 00:08:50.116 Number of I/O Submission Queues: 64 00:08:50.116 Number of I/O Completion Queues: 64 00:08:50.116 00:08:50.116 ZNS Specific Controller Data 00:08:50.116 ============================ 00:08:50.116 Zone Append Size Limit: 0 00:08:50.116 00:08:50.116 00:08:50.116 Active Namespaces 00:08:50.116 ================= 00:08:50.116 Namespace ID:1 00:08:50.116 Error Recovery Timeout: Unlimited 00:08:50.116 Command Set Identifier: NVM (00h) 00:08:50.116 Deallocate: Supported 00:08:50.116 Deallocated/Unwritten Error: Supported 00:08:50.116 Deallocated Read Value: All 0x00 00:08:50.116 Deallocate in Write Zeroes: Not Supported 00:08:50.116 Deallocated Guard Field: 0xFFFF 00:08:50.116 Flush: Supported 00:08:50.116 Reservation: Not Supported 00:08:50.116 Namespace Sharing Capabilities: Private 00:08:50.116 Size (in LBAs): 1048576 (4GiB) 00:08:50.116 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.116 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.116 Thin Provisioning: Not Supported 00:08:50.116 Per-NS Atomic Units: No 00:08:50.116 Maximum Single Source Range Length: 128 00:08:50.116 Maximum Copy Length: 128 00:08:50.116 Maximum Source Range Count: 128 00:08:50.116 NGUID/EUI64 Never Reused: No 00:08:50.116 Namespace Write Protected: No 00:08:50.116 Number of LBA Formats: 8 00:08:50.116 Current LBA Format: LBA Format #04 00:08:50.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.116 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.116 00:08:50.116 NVM Specific Namespace Data 00:08:50.116 
=========================== 00:08:50.116 Logical Block Storage Tag Mask: 0 00:08:50.116 Protection Information Capabilities: 00:08:50.116 16b Guard Protection Information Storage Tag Support: No 00:08:50.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.116 Storage Tag Check Read Support: No 00:08:50.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Namespace ID:2 00:08:50.117 Error Recovery Timeout: Unlimited 00:08:50.117 Command Set Identifier: NVM (00h) 00:08:50.117 Deallocate: Supported 00:08:50.117 Deallocated/Unwritten Error: Supported 00:08:50.117 Deallocated Read Value: All 0x00 00:08:50.117 Deallocate in Write Zeroes: Not Supported 00:08:50.117 Deallocated Guard Field: 0xFFFF 00:08:50.117 Flush: Supported 00:08:50.117 Reservation: Not Supported 00:08:50.117 Namespace Sharing Capabilities: Private 00:08:50.117 Size (in LBAs): 1048576 (4GiB) 00:08:50.117 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.117 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.117 Thin Provisioning: Not Supported 00:08:50.117 Per-NS Atomic Units: No 00:08:50.117 Maximum Single Source Range Length: 128 00:08:50.117 Maximum Copy Length: 128 00:08:50.117 Maximum Source Range Count: 128 00:08:50.117 NGUID/EUI64 Never Reused: No 00:08:50.117 Namespace Write Protected: No 00:08:50.117 Number of LBA Formats: 8 00:08:50.117 Current LBA Format: LBA Format #04 00:08:50.117 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.117 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.117 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.117 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.117 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.117 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.117 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.117 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.117 00:08:50.117 NVM Specific Namespace Data 00:08:50.117 =========================== 00:08:50.117 Logical Block Storage Tag Mask: 0 00:08:50.117 Protection Information Capabilities: 00:08:50.117 16b Guard Protection Information Storage Tag Support: No 00:08:50.117 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.117 Storage Tag Check Read Support: No 00:08:50.117 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Namespace ID:3 00:08:50.117 Error Recovery Timeout: Unlimited 00:08:50.117 Command Set Identifier: NVM (00h) 00:08:50.117 Deallocate: Supported 00:08:50.117 Deallocated/Unwritten Error: Supported 00:08:50.117 Deallocated Read Value: All 0x00 00:08:50.117 Deallocate in Write Zeroes: Not Supported 00:08:50.117 Deallocated Guard Field: 0xFFFF 00:08:50.117 Flush: Supported 00:08:50.117 Reservation: Not Supported 00:08:50.117 Namespace Sharing Capabilities: Private 00:08:50.117 Size (in LBAs): 1048576 (4GiB) 00:08:50.117 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.117 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.117 Thin Provisioning: Not Supported 00:08:50.117 Per-NS Atomic Units: No 00:08:50.117 Maximum Single Source Range Length: 128 00:08:50.117 Maximum Copy Length: 128 00:08:50.117 Maximum Source Range Count: 128 00:08:50.117 NGUID/EUI64 Never Reused: No 00:08:50.117 Namespace Write Protected: No 00:08:50.117 Number of LBA Formats: 8 00:08:50.117 Current LBA Format: LBA Format #04 00:08:50.117 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.117 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.117 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.117 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.117 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.117 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.117 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.117 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.117 00:08:50.117 NVM Specific Namespace Data 00:08:50.117 =========================== 00:08:50.117 Logical Block Storage Tag Mask: 0 00:08:50.117 Protection Information Capabilities: 00:08:50.117 16b Guard Protection Information Storage Tag Support: No 00:08:50.117 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.117 Storage Tag Check Read Support: No 00:08:50.117 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.117 13:04:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.117 13:04:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:50.377 ===================================================== 00:08:50.377 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.377 ===================================================== 00:08:50.377 
Controller Capabilities/Features 00:08:50.377 ================================ 00:08:50.377 Vendor ID: 1b36 00:08:50.377 Subsystem Vendor ID: 1af4 00:08:50.377 Serial Number: 12340 00:08:50.377 Model Number: QEMU NVMe Ctrl 00:08:50.377 Firmware Version: 8.0.0 00:08:50.377 Recommended Arb Burst: 6 00:08:50.377 IEEE OUI Identifier: 00 54 52 00:08:50.377 Multi-path I/O 00:08:50.377 May have multiple subsystem ports: No 00:08:50.377 May have multiple controllers: No 00:08:50.377 Associated with SR-IOV VF: No 00:08:50.377 Max Data Transfer Size: 524288 00:08:50.377 Max Number of Namespaces: 256 00:08:50.377 Max Number of I/O Queues: 64 00:08:50.377 NVMe Specification Version (VS): 1.4 00:08:50.377 NVMe Specification Version (Identify): 1.4 00:08:50.377 Maximum Queue Entries: 2048 00:08:50.377 Contiguous Queues Required: Yes 00:08:50.377 Arbitration Mechanisms Supported 00:08:50.377 Weighted Round Robin: Not Supported 00:08:50.377 Vendor Specific: Not Supported 00:08:50.377 Reset Timeout: 7500 ms 00:08:50.377 Doorbell Stride: 4 bytes 00:08:50.377 NVM Subsystem Reset: Not Supported 00:08:50.377 Command Sets Supported 00:08:50.377 NVM Command Set: Supported 00:08:50.377 Boot Partition: Not Supported 00:08:50.377 Memory Page Size Minimum: 4096 bytes 00:08:50.377 Memory Page Size Maximum: 65536 bytes 00:08:50.377 Persistent Memory Region: Not Supported 00:08:50.377 Optional Asynchronous Events Supported 00:08:50.377 Namespace Attribute Notices: Supported 00:08:50.377 Firmware Activation Notices: Not Supported 00:08:50.377 ANA Change Notices: Not Supported 00:08:50.377 PLE Aggregate Log Change Notices: Not Supported 00:08:50.377 LBA Status Info Alert Notices: Not Supported 00:08:50.377 EGE Aggregate Log Change Notices: Not Supported 00:08:50.377 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.377 Zone Descriptor Change Notices: Not Supported 00:08:50.377 Discovery Log Change Notices: Not Supported 00:08:50.377 Controller Attributes 00:08:50.377 128-bit Host Identifier: Not Supported 00:08:50.377 Non-Operational Permissive Mode: Not Supported 00:08:50.377 NVM Sets: Not Supported 00:08:50.377 Read Recovery Levels: Not Supported 00:08:50.377 Endurance Groups: Not Supported 00:08:50.377 Predictable Latency Mode: Not Supported 00:08:50.377 Traffic Based Keep ALive: Not Supported 00:08:50.377 Namespace Granularity: Not Supported 00:08:50.377 SQ Associations: Not Supported 00:08:50.377 UUID List: Not Supported 00:08:50.377 Multi-Domain Subsystem: Not Supported 00:08:50.377 Fixed Capacity Management: Not Supported 00:08:50.377 Variable Capacity Management: Not Supported 00:08:50.377 Delete Endurance Group: Not Supported 00:08:50.377 Delete NVM Set: Not Supported 00:08:50.377 Extended LBA Formats Supported: Supported 00:08:50.377 Flexible Data Placement Supported: Not Supported 00:08:50.377 00:08:50.377 Controller Memory Buffer Support 00:08:50.377 ================================ 00:08:50.377 Supported: No 00:08:50.377 00:08:50.377 Persistent Memory Region Support 00:08:50.377 ================================ 00:08:50.377 Supported: No 00:08:50.377 00:08:50.377 Admin Command Set Attributes 00:08:50.377 ============================ 00:08:50.377 Security Send/Receive: Not Supported 00:08:50.377 Format NVM: Supported 00:08:50.377 Firmware Activate/Download: Not Supported 00:08:50.377 Namespace Management: Supported 00:08:50.377 Device Self-Test: Not Supported 00:08:50.377 Directives: Supported 00:08:50.377 NVMe-MI: Not Supported 00:08:50.377 Virtualization Management: Not Supported 00:08:50.377 
Doorbell Buffer Config: Supported 00:08:50.377 Get LBA Status Capability: Not Supported 00:08:50.377 Command & Feature Lockdown Capability: Not Supported 00:08:50.377 Abort Command Limit: 4 00:08:50.377 Async Event Request Limit: 4 00:08:50.377 Number of Firmware Slots: N/A 00:08:50.377 Firmware Slot 1 Read-Only: N/A 00:08:50.377 Firmware Activation Without Reset: N/A 00:08:50.377 Multiple Update Detection Support: N/A 00:08:50.377 Firmware Update Granularity: No Information Provided 00:08:50.377 Per-Namespace SMART Log: Yes 00:08:50.377 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.377 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:50.377 Command Effects Log Page: Supported 00:08:50.377 Get Log Page Extended Data: Supported 00:08:50.377 Telemetry Log Pages: Not Supported 00:08:50.377 Persistent Event Log Pages: Not Supported 00:08:50.377 Supported Log Pages Log Page: May Support 00:08:50.377 Commands Supported & Effects Log Page: Not Supported 00:08:50.377 Feature Identifiers & Effects Log Page:May Support 00:08:50.377 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.377 Data Area 4 for Telemetry Log: Not Supported 00:08:50.377 Error Log Page Entries Supported: 1 00:08:50.377 Keep Alive: Not Supported 00:08:50.377 00:08:50.377 NVM Command Set Attributes 00:08:50.377 ========================== 00:08:50.377 Submission Queue Entry Size 00:08:50.377 Max: 64 00:08:50.377 Min: 64 00:08:50.377 Completion Queue Entry Size 00:08:50.377 Max: 16 00:08:50.377 Min: 16 00:08:50.377 Number of Namespaces: 256 00:08:50.377 Compare Command: Supported 00:08:50.377 Write Uncorrectable Command: Not Supported 00:08:50.377 Dataset Management Command: Supported 00:08:50.377 Write Zeroes Command: Supported 00:08:50.377 Set Features Save Field: Supported 00:08:50.377 Reservations: Not Supported 00:08:50.377 Timestamp: Supported 00:08:50.377 Copy: Supported 00:08:50.377 Volatile Write Cache: Present 00:08:50.377 Atomic Write Unit (Normal): 1 00:08:50.377 Atomic Write Unit (PFail): 1 00:08:50.377 Atomic Compare & Write Unit: 1 00:08:50.377 Fused Compare & Write: Not Supported 00:08:50.377 Scatter-Gather List 00:08:50.377 SGL Command Set: Supported 00:08:50.377 SGL Keyed: Not Supported 00:08:50.377 SGL Bit Bucket Descriptor: Not Supported 00:08:50.377 SGL Metadata Pointer: Not Supported 00:08:50.377 Oversized SGL: Not Supported 00:08:50.377 SGL Metadata Address: Not Supported 00:08:50.377 SGL Offset: Not Supported 00:08:50.377 Transport SGL Data Block: Not Supported 00:08:50.377 Replay Protected Memory Block: Not Supported 00:08:50.377 00:08:50.377 Firmware Slot Information 00:08:50.377 ========================= 00:08:50.377 Active slot: 1 00:08:50.377 Slot 1 Firmware Revision: 1.0 00:08:50.377 00:08:50.377 00:08:50.377 Commands Supported and Effects 00:08:50.377 ============================== 00:08:50.377 Admin Commands 00:08:50.377 -------------- 00:08:50.377 Delete I/O Submission Queue (00h): Supported 00:08:50.377 Create I/O Submission Queue (01h): Supported 00:08:50.377 Get Log Page (02h): Supported 00:08:50.377 Delete I/O Completion Queue (04h): Supported 00:08:50.377 Create I/O Completion Queue (05h): Supported 00:08:50.377 Identify (06h): Supported 00:08:50.377 Abort (08h): Supported 00:08:50.377 Set Features (09h): Supported 00:08:50.377 Get Features (0Ah): Supported 00:08:50.377 Asynchronous Event Request (0Ch): Supported 00:08:50.377 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.377 Directive Send (19h): Supported 00:08:50.377 Directive Receive (1Ah): Supported 
00:08:50.377 Virtualization Management (1Ch): Supported 00:08:50.377 Doorbell Buffer Config (7Ch): Supported 00:08:50.377 Format NVM (80h): Supported LBA-Change 00:08:50.377 I/O Commands 00:08:50.377 ------------ 00:08:50.377 Flush (00h): Supported LBA-Change 00:08:50.377 Write (01h): Supported LBA-Change 00:08:50.377 Read (02h): Supported 00:08:50.377 Compare (05h): Supported 00:08:50.377 Write Zeroes (08h): Supported LBA-Change 00:08:50.377 Dataset Management (09h): Supported LBA-Change 00:08:50.377 Unknown (0Ch): Supported 00:08:50.377 Unknown (12h): Supported 00:08:50.377 Copy (19h): Supported LBA-Change 00:08:50.377 Unknown (1Dh): Supported LBA-Change 00:08:50.377 00:08:50.377 Error Log 00:08:50.377 ========= 00:08:50.377 00:08:50.377 Arbitration 00:08:50.377 =========== 00:08:50.377 Arbitration Burst: no limit 00:08:50.377 00:08:50.377 Power Management 00:08:50.377 ================ 00:08:50.377 Number of Power States: 1 00:08:50.377 Current Power State: Power State #0 00:08:50.377 Power State #0: 00:08:50.377 Max Power: 25.00 W 00:08:50.377 Non-Operational State: Operational 00:08:50.378 Entry Latency: 16 microseconds 00:08:50.378 Exit Latency: 4 microseconds 00:08:50.378 Relative Read Throughput: 0 00:08:50.378 Relative Read Latency: 0 00:08:50.378 Relative Write Throughput: 0 00:08:50.378 Relative Write Latency: 0 00:08:50.378 Idle Power: Not Reported 00:08:50.378 Active Power: Not Reported 00:08:50.378 Non-Operational Permissive Mode: Not Supported 00:08:50.378 00:08:50.378 Health Information 00:08:50.378 ================== 00:08:50.378 Critical Warnings: 00:08:50.378 Available Spare Space: OK 00:08:50.378 Temperature: OK 00:08:50.378 Device Reliability: OK 00:08:50.378 Read Only: No 00:08:50.378 Volatile Memory Backup: OK 00:08:50.378 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.378 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.378 Available Spare: 0% 00:08:50.378 Available Spare Threshold: 0% 00:08:50.378 Life Percentage Used: 0% 00:08:50.378 Data Units Read: 666 00:08:50.378 Data Units Written: 557 00:08:50.378 Host Read Commands: 33042 00:08:50.378 Host Write Commands: 32080 00:08:50.378 Controller Busy Time: 0 minutes 00:08:50.378 Power Cycles: 0 00:08:50.378 Power On Hours: 0 hours 00:08:50.378 Unsafe Shutdowns: 0 00:08:50.378 Unrecoverable Media Errors: 0 00:08:50.378 Lifetime Error Log Entries: 0 00:08:50.378 Warning Temperature Time: 0 minutes 00:08:50.378 Critical Temperature Time: 0 minutes 00:08:50.378 00:08:50.378 Number of Queues 00:08:50.378 ================ 00:08:50.378 Number of I/O Submission Queues: 64 00:08:50.378 Number of I/O Completion Queues: 64 00:08:50.378 00:08:50.378 ZNS Specific Controller Data 00:08:50.378 ============================ 00:08:50.378 Zone Append Size Limit: 0 00:08:50.378 00:08:50.378 00:08:50.378 Active Namespaces 00:08:50.378 ================= 00:08:50.378 Namespace ID:1 00:08:50.378 Error Recovery Timeout: Unlimited 00:08:50.378 Command Set Identifier: NVM (00h) 00:08:50.378 Deallocate: Supported 00:08:50.378 Deallocated/Unwritten Error: Supported 00:08:50.378 Deallocated Read Value: All 0x00 00:08:50.378 Deallocate in Write Zeroes: Not Supported 00:08:50.378 Deallocated Guard Field: 0xFFFF 00:08:50.378 Flush: Supported 00:08:50.378 Reservation: Not Supported 00:08:50.378 Metadata Transferred as: Separate Metadata Buffer 00:08:50.378 Namespace Sharing Capabilities: Private 00:08:50.378 Size (in LBAs): 1548666 (5GiB) 00:08:50.378 Capacity (in LBAs): 1548666 (5GiB) 00:08:50.378 Utilization (in LBAs): 1548666 (5GiB) 
00:08:50.378 Thin Provisioning: Not Supported 00:08:50.378 Per-NS Atomic Units: No 00:08:50.378 Maximum Single Source Range Length: 128 00:08:50.378 Maximum Copy Length: 128 00:08:50.378 Maximum Source Range Count: 128 00:08:50.378 NGUID/EUI64 Never Reused: No 00:08:50.378 Namespace Write Protected: No 00:08:50.378 Number of LBA Formats: 8 00:08:50.378 Current LBA Format: LBA Format #07 00:08:50.378 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.378 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.378 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.378 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.378 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.378 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.378 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.378 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.378 00:08:50.378 NVM Specific Namespace Data 00:08:50.378 =========================== 00:08:50.378 Logical Block Storage Tag Mask: 0 00:08:50.378 Protection Information Capabilities: 00:08:50.378 16b Guard Protection Information Storage Tag Support: No 00:08:50.378 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.378 Storage Tag Check Read Support: No 00:08:50.378 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.378 13:04:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.378 13:04:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:50.638 ===================================================== 00:08:50.638 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.638 ===================================================== 00:08:50.638 Controller Capabilities/Features 00:08:50.638 ================================ 00:08:50.638 Vendor ID: 1b36 00:08:50.638 Subsystem Vendor ID: 1af4 00:08:50.638 Serial Number: 12341 00:08:50.638 Model Number: QEMU NVMe Ctrl 00:08:50.638 Firmware Version: 8.0.0 00:08:50.638 Recommended Arb Burst: 6 00:08:50.638 IEEE OUI Identifier: 00 54 52 00:08:50.638 Multi-path I/O 00:08:50.638 May have multiple subsystem ports: No 00:08:50.638 May have multiple controllers: No 00:08:50.638 Associated with SR-IOV VF: No 00:08:50.638 Max Data Transfer Size: 524288 00:08:50.638 Max Number of Namespaces: 256 00:08:50.638 Max Number of I/O Queues: 64 00:08:50.638 NVMe Specification Version (VS): 1.4 00:08:50.638 NVMe Specification Version (Identify): 1.4 00:08:50.638 Maximum Queue Entries: 2048 00:08:50.638 Contiguous Queues Required: Yes 00:08:50.638 Arbitration Mechanisms Supported 00:08:50.638 Weighted Round Robin: Not Supported 00:08:50.638 Vendor Specific: Not Supported 
00:08:50.638 Reset Timeout: 7500 ms 00:08:50.638 Doorbell Stride: 4 bytes 00:08:50.638 NVM Subsystem Reset: Not Supported 00:08:50.638 Command Sets Supported 00:08:50.638 NVM Command Set: Supported 00:08:50.638 Boot Partition: Not Supported 00:08:50.638 Memory Page Size Minimum: 4096 bytes 00:08:50.638 Memory Page Size Maximum: 65536 bytes 00:08:50.638 Persistent Memory Region: Not Supported 00:08:50.638 Optional Asynchronous Events Supported 00:08:50.638 Namespace Attribute Notices: Supported 00:08:50.638 Firmware Activation Notices: Not Supported 00:08:50.638 ANA Change Notices: Not Supported 00:08:50.638 PLE Aggregate Log Change Notices: Not Supported 00:08:50.638 LBA Status Info Alert Notices: Not Supported 00:08:50.638 EGE Aggregate Log Change Notices: Not Supported 00:08:50.638 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.638 Zone Descriptor Change Notices: Not Supported 00:08:50.638 Discovery Log Change Notices: Not Supported 00:08:50.638 Controller Attributes 00:08:50.638 128-bit Host Identifier: Not Supported 00:08:50.638 Non-Operational Permissive Mode: Not Supported 00:08:50.638 NVM Sets: Not Supported 00:08:50.638 Read Recovery Levels: Not Supported 00:08:50.638 Endurance Groups: Not Supported 00:08:50.638 Predictable Latency Mode: Not Supported 00:08:50.638 Traffic Based Keep ALive: Not Supported 00:08:50.638 Namespace Granularity: Not Supported 00:08:50.638 SQ Associations: Not Supported 00:08:50.638 UUID List: Not Supported 00:08:50.638 Multi-Domain Subsystem: Not Supported 00:08:50.638 Fixed Capacity Management: Not Supported 00:08:50.638 Variable Capacity Management: Not Supported 00:08:50.638 Delete Endurance Group: Not Supported 00:08:50.638 Delete NVM Set: Not Supported 00:08:50.638 Extended LBA Formats Supported: Supported 00:08:50.638 Flexible Data Placement Supported: Not Supported 00:08:50.638 00:08:50.638 Controller Memory Buffer Support 00:08:50.638 ================================ 00:08:50.638 Supported: No 00:08:50.638 00:08:50.638 Persistent Memory Region Support 00:08:50.638 ================================ 00:08:50.638 Supported: No 00:08:50.638 00:08:50.638 Admin Command Set Attributes 00:08:50.638 ============================ 00:08:50.638 Security Send/Receive: Not Supported 00:08:50.638 Format NVM: Supported 00:08:50.638 Firmware Activate/Download: Not Supported 00:08:50.638 Namespace Management: Supported 00:08:50.638 Device Self-Test: Not Supported 00:08:50.638 Directives: Supported 00:08:50.638 NVMe-MI: Not Supported 00:08:50.638 Virtualization Management: Not Supported 00:08:50.638 Doorbell Buffer Config: Supported 00:08:50.638 Get LBA Status Capability: Not Supported 00:08:50.638 Command & Feature Lockdown Capability: Not Supported 00:08:50.638 Abort Command Limit: 4 00:08:50.638 Async Event Request Limit: 4 00:08:50.638 Number of Firmware Slots: N/A 00:08:50.638 Firmware Slot 1 Read-Only: N/A 00:08:50.638 Firmware Activation Without Reset: N/A 00:08:50.638 Multiple Update Detection Support: N/A 00:08:50.638 Firmware Update Granularity: No Information Provided 00:08:50.638 Per-Namespace SMART Log: Yes 00:08:50.638 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.638 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:50.638 Command Effects Log Page: Supported 00:08:50.638 Get Log Page Extended Data: Supported 00:08:50.638 Telemetry Log Pages: Not Supported 00:08:50.638 Persistent Event Log Pages: Not Supported 00:08:50.638 Supported Log Pages Log Page: May Support 00:08:50.638 Commands Supported & Effects Log Page: Not Supported 
00:08:50.638 Feature Identifiers & Effects Log Page:May Support 00:08:50.638 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.638 Data Area 4 for Telemetry Log: Not Supported 00:08:50.638 Error Log Page Entries Supported: 1 00:08:50.638 Keep Alive: Not Supported 00:08:50.638 00:08:50.638 NVM Command Set Attributes 00:08:50.638 ========================== 00:08:50.638 Submission Queue Entry Size 00:08:50.638 Max: 64 00:08:50.638 Min: 64 00:08:50.638 Completion Queue Entry Size 00:08:50.638 Max: 16 00:08:50.638 Min: 16 00:08:50.638 Number of Namespaces: 256 00:08:50.638 Compare Command: Supported 00:08:50.638 Write Uncorrectable Command: Not Supported 00:08:50.638 Dataset Management Command: Supported 00:08:50.638 Write Zeroes Command: Supported 00:08:50.638 Set Features Save Field: Supported 00:08:50.638 Reservations: Not Supported 00:08:50.638 Timestamp: Supported 00:08:50.638 Copy: Supported 00:08:50.638 Volatile Write Cache: Present 00:08:50.638 Atomic Write Unit (Normal): 1 00:08:50.638 Atomic Write Unit (PFail): 1 00:08:50.638 Atomic Compare & Write Unit: 1 00:08:50.638 Fused Compare & Write: Not Supported 00:08:50.638 Scatter-Gather List 00:08:50.638 SGL Command Set: Supported 00:08:50.638 SGL Keyed: Not Supported 00:08:50.638 SGL Bit Bucket Descriptor: Not Supported 00:08:50.638 SGL Metadata Pointer: Not Supported 00:08:50.638 Oversized SGL: Not Supported 00:08:50.638 SGL Metadata Address: Not Supported 00:08:50.638 SGL Offset: Not Supported 00:08:50.638 Transport SGL Data Block: Not Supported 00:08:50.638 Replay Protected Memory Block: Not Supported 00:08:50.638 00:08:50.638 Firmware Slot Information 00:08:50.638 ========================= 00:08:50.638 Active slot: 1 00:08:50.638 Slot 1 Firmware Revision: 1.0 00:08:50.638 00:08:50.638 00:08:50.638 Commands Supported and Effects 00:08:50.638 ============================== 00:08:50.638 Admin Commands 00:08:50.638 -------------- 00:08:50.638 Delete I/O Submission Queue (00h): Supported 00:08:50.638 Create I/O Submission Queue (01h): Supported 00:08:50.638 Get Log Page (02h): Supported 00:08:50.638 Delete I/O Completion Queue (04h): Supported 00:08:50.638 Create I/O Completion Queue (05h): Supported 00:08:50.638 Identify (06h): Supported 00:08:50.638 Abort (08h): Supported 00:08:50.639 Set Features (09h): Supported 00:08:50.639 Get Features (0Ah): Supported 00:08:50.639 Asynchronous Event Request (0Ch): Supported 00:08:50.639 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.639 Directive Send (19h): Supported 00:08:50.639 Directive Receive (1Ah): Supported 00:08:50.639 Virtualization Management (1Ch): Supported 00:08:50.639 Doorbell Buffer Config (7Ch): Supported 00:08:50.639 Format NVM (80h): Supported LBA-Change 00:08:50.639 I/O Commands 00:08:50.639 ------------ 00:08:50.639 Flush (00h): Supported LBA-Change 00:08:50.639 Write (01h): Supported LBA-Change 00:08:50.639 Read (02h): Supported 00:08:50.639 Compare (05h): Supported 00:08:50.639 Write Zeroes (08h): Supported LBA-Change 00:08:50.639 Dataset Management (09h): Supported LBA-Change 00:08:50.639 Unknown (0Ch): Supported 00:08:50.639 Unknown (12h): Supported 00:08:50.639 Copy (19h): Supported LBA-Change 00:08:50.639 Unknown (1Dh): Supported LBA-Change 00:08:50.639 00:08:50.639 Error Log 00:08:50.639 ========= 00:08:50.639 00:08:50.639 Arbitration 00:08:50.639 =========== 00:08:50.639 Arbitration Burst: no limit 00:08:50.639 00:08:50.639 Power Management 00:08:50.639 ================ 00:08:50.639 Number of Power States: 1 00:08:50.639 Current Power State: 
Power State #0 00:08:50.639 Power State #0: 00:08:50.639 Max Power: 25.00 W 00:08:50.639 Non-Operational State: Operational 00:08:50.639 Entry Latency: 16 microseconds 00:08:50.639 Exit Latency: 4 microseconds 00:08:50.639 Relative Read Throughput: 0 00:08:50.639 Relative Read Latency: 0 00:08:50.639 Relative Write Throughput: 0 00:08:50.639 Relative Write Latency: 0 00:08:50.639 Idle Power: Not Reported 00:08:50.639 Active Power: Not Reported 00:08:50.639 Non-Operational Permissive Mode: Not Supported 00:08:50.639 00:08:50.639 Health Information 00:08:50.639 ================== 00:08:50.639 Critical Warnings: 00:08:50.639 Available Spare Space: OK 00:08:50.639 Temperature: OK 00:08:50.639 Device Reliability: OK 00:08:50.639 Read Only: No 00:08:50.639 Volatile Memory Backup: OK 00:08:50.639 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.639 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.639 Available Spare: 0% 00:08:50.639 Available Spare Threshold: 0% 00:08:50.639 Life Percentage Used: 0% 00:08:50.639 Data Units Read: 1068 00:08:50.639 Data Units Written: 859 00:08:50.639 Host Read Commands: 50119 00:08:50.639 Host Write Commands: 47274 00:08:50.639 Controller Busy Time: 0 minutes 00:08:50.639 Power Cycles: 0 00:08:50.639 Power On Hours: 0 hours 00:08:50.639 Unsafe Shutdowns: 0 00:08:50.639 Unrecoverable Media Errors: 0 00:08:50.639 Lifetime Error Log Entries: 0 00:08:50.639 Warning Temperature Time: 0 minutes 00:08:50.639 Critical Temperature Time: 0 minutes 00:08:50.639 00:08:50.639 Number of Queues 00:08:50.639 ================ 00:08:50.639 Number of I/O Submission Queues: 64 00:08:50.639 Number of I/O Completion Queues: 64 00:08:50.639 00:08:50.639 ZNS Specific Controller Data 00:08:50.639 ============================ 00:08:50.639 Zone Append Size Limit: 0 00:08:50.639 00:08:50.639 00:08:50.639 Active Namespaces 00:08:50.639 ================= 00:08:50.639 Namespace ID:1 00:08:50.639 Error Recovery Timeout: Unlimited 00:08:50.639 Command Set Identifier: NVM (00h) 00:08:50.639 Deallocate: Supported 00:08:50.639 Deallocated/Unwritten Error: Supported 00:08:50.639 Deallocated Read Value: All 0x00 00:08:50.639 Deallocate in Write Zeroes: Not Supported 00:08:50.639 Deallocated Guard Field: 0xFFFF 00:08:50.639 Flush: Supported 00:08:50.639 Reservation: Not Supported 00:08:50.639 Namespace Sharing Capabilities: Private 00:08:50.639 Size (in LBAs): 1310720 (5GiB) 00:08:50.639 Capacity (in LBAs): 1310720 (5GiB) 00:08:50.639 Utilization (in LBAs): 1310720 (5GiB) 00:08:50.639 Thin Provisioning: Not Supported 00:08:50.639 Per-NS Atomic Units: No 00:08:50.639 Maximum Single Source Range Length: 128 00:08:50.639 Maximum Copy Length: 128 00:08:50.639 Maximum Source Range Count: 128 00:08:50.639 NGUID/EUI64 Never Reused: No 00:08:50.639 Namespace Write Protected: No 00:08:50.639 Number of LBA Formats: 8 00:08:50.639 Current LBA Format: LBA Format #04 00:08:50.639 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.639 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.639 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.639 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.639 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.639 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.639 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.639 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.639 00:08:50.639 NVM Specific Namespace Data 00:08:50.639 =========================== 00:08:50.639 Logical Block Storage Tag Mask: 0 
00:08:50.639 Protection Information Capabilities: 00:08:50.639 16b Guard Protection Information Storage Tag Support: No 00:08:50.639 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.639 Storage Tag Check Read Support: No 00:08:50.639 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.639 13:04:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.639 13:04:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:50.899 ===================================================== 00:08:50.899 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.899 ===================================================== 00:08:50.899 Controller Capabilities/Features 00:08:50.899 ================================ 00:08:50.899 Vendor ID: 1b36 00:08:50.899 Subsystem Vendor ID: 1af4 00:08:50.899 Serial Number: 12342 00:08:50.899 Model Number: QEMU NVMe Ctrl 00:08:50.899 Firmware Version: 8.0.0 00:08:50.899 Recommended Arb Burst: 6 00:08:50.899 IEEE OUI Identifier: 00 54 52 00:08:50.899 Multi-path I/O 00:08:50.899 May have multiple subsystem ports: No 00:08:50.899 May have multiple controllers: No 00:08:50.899 Associated with SR-IOV VF: No 00:08:50.899 Max Data Transfer Size: 524288 00:08:50.899 Max Number of Namespaces: 256 00:08:50.899 Max Number of I/O Queues: 64 00:08:50.899 NVMe Specification Version (VS): 1.4 00:08:50.899 NVMe Specification Version (Identify): 1.4 00:08:50.899 Maximum Queue Entries: 2048 00:08:50.899 Contiguous Queues Required: Yes 00:08:50.899 Arbitration Mechanisms Supported 00:08:50.899 Weighted Round Robin: Not Supported 00:08:50.899 Vendor Specific: Not Supported 00:08:50.899 Reset Timeout: 7500 ms 00:08:50.899 Doorbell Stride: 4 bytes 00:08:50.899 NVM Subsystem Reset: Not Supported 00:08:50.899 Command Sets Supported 00:08:50.899 NVM Command Set: Supported 00:08:50.899 Boot Partition: Not Supported 00:08:50.899 Memory Page Size Minimum: 4096 bytes 00:08:50.899 Memory Page Size Maximum: 65536 bytes 00:08:50.899 Persistent Memory Region: Not Supported 00:08:50.899 Optional Asynchronous Events Supported 00:08:50.899 Namespace Attribute Notices: Supported 00:08:50.899 Firmware Activation Notices: Not Supported 00:08:50.899 ANA Change Notices: Not Supported 00:08:50.899 PLE Aggregate Log Change Notices: Not Supported 00:08:50.899 LBA Status Info Alert Notices: Not Supported 00:08:50.899 EGE Aggregate Log Change Notices: Not Supported 00:08:50.899 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.899 Zone Descriptor Change Notices: Not Supported 00:08:50.899 Discovery Log Change Notices: Not Supported 00:08:50.899 Controller Attributes 00:08:50.899 128-bit Host Identifier: 
Not Supported 00:08:50.899 Non-Operational Permissive Mode: Not Supported 00:08:50.899 NVM Sets: Not Supported 00:08:50.899 Read Recovery Levels: Not Supported 00:08:50.899 Endurance Groups: Not Supported 00:08:50.899 Predictable Latency Mode: Not Supported 00:08:50.899 Traffic Based Keep ALive: Not Supported 00:08:50.899 Namespace Granularity: Not Supported 00:08:50.899 SQ Associations: Not Supported 00:08:50.899 UUID List: Not Supported 00:08:50.899 Multi-Domain Subsystem: Not Supported 00:08:50.899 Fixed Capacity Management: Not Supported 00:08:50.899 Variable Capacity Management: Not Supported 00:08:50.899 Delete Endurance Group: Not Supported 00:08:50.899 Delete NVM Set: Not Supported 00:08:50.899 Extended LBA Formats Supported: Supported 00:08:50.899 Flexible Data Placement Supported: Not Supported 00:08:50.899 00:08:50.899 Controller Memory Buffer Support 00:08:50.899 ================================ 00:08:50.899 Supported: No 00:08:50.899 00:08:50.899 Persistent Memory Region Support 00:08:50.899 ================================ 00:08:50.899 Supported: No 00:08:50.899 00:08:50.899 Admin Command Set Attributes 00:08:50.899 ============================ 00:08:50.899 Security Send/Receive: Not Supported 00:08:50.899 Format NVM: Supported 00:08:50.899 Firmware Activate/Download: Not Supported 00:08:50.899 Namespace Management: Supported 00:08:50.899 Device Self-Test: Not Supported 00:08:50.899 Directives: Supported 00:08:50.899 NVMe-MI: Not Supported 00:08:50.899 Virtualization Management: Not Supported 00:08:50.899 Doorbell Buffer Config: Supported 00:08:50.899 Get LBA Status Capability: Not Supported 00:08:50.899 Command & Feature Lockdown Capability: Not Supported 00:08:50.899 Abort Command Limit: 4 00:08:50.899 Async Event Request Limit: 4 00:08:50.899 Number of Firmware Slots: N/A 00:08:50.899 Firmware Slot 1 Read-Only: N/A 00:08:50.899 Firmware Activation Without Reset: N/A 00:08:50.899 Multiple Update Detection Support: N/A 00:08:50.899 Firmware Update Granularity: No Information Provided 00:08:50.899 Per-Namespace SMART Log: Yes 00:08:50.899 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.899 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:50.899 Command Effects Log Page: Supported 00:08:50.899 Get Log Page Extended Data: Supported 00:08:50.899 Telemetry Log Pages: Not Supported 00:08:50.899 Persistent Event Log Pages: Not Supported 00:08:50.899 Supported Log Pages Log Page: May Support 00:08:50.899 Commands Supported & Effects Log Page: Not Supported 00:08:50.899 Feature Identifiers & Effects Log Page:May Support 00:08:50.899 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.899 Data Area 4 for Telemetry Log: Not Supported 00:08:50.899 Error Log Page Entries Supported: 1 00:08:50.899 Keep Alive: Not Supported 00:08:50.899 00:08:50.899 NVM Command Set Attributes 00:08:50.899 ========================== 00:08:50.899 Submission Queue Entry Size 00:08:50.899 Max: 64 00:08:50.899 Min: 64 00:08:50.899 Completion Queue Entry Size 00:08:50.899 Max: 16 00:08:50.899 Min: 16 00:08:50.899 Number of Namespaces: 256 00:08:50.899 Compare Command: Supported 00:08:50.899 Write Uncorrectable Command: Not Supported 00:08:50.899 Dataset Management Command: Supported 00:08:50.899 Write Zeroes Command: Supported 00:08:50.899 Set Features Save Field: Supported 00:08:50.899 Reservations: Not Supported 00:08:50.899 Timestamp: Supported 00:08:50.899 Copy: Supported 00:08:50.899 Volatile Write Cache: Present 00:08:50.900 Atomic Write Unit (Normal): 1 00:08:50.900 Atomic Write Unit 
(PFail): 1 00:08:50.900 Atomic Compare & Write Unit: 1 00:08:50.900 Fused Compare & Write: Not Supported 00:08:50.900 Scatter-Gather List 00:08:50.900 SGL Command Set: Supported 00:08:50.900 SGL Keyed: Not Supported 00:08:50.900 SGL Bit Bucket Descriptor: Not Supported 00:08:50.900 SGL Metadata Pointer: Not Supported 00:08:50.900 Oversized SGL: Not Supported 00:08:50.900 SGL Metadata Address: Not Supported 00:08:50.900 SGL Offset: Not Supported 00:08:50.900 Transport SGL Data Block: Not Supported 00:08:50.900 Replay Protected Memory Block: Not Supported 00:08:50.900 00:08:50.900 Firmware Slot Information 00:08:50.900 ========================= 00:08:50.900 Active slot: 1 00:08:50.900 Slot 1 Firmware Revision: 1.0 00:08:50.900 00:08:50.900 00:08:50.900 Commands Supported and Effects 00:08:50.900 ============================== 00:08:50.900 Admin Commands 00:08:50.900 -------------- 00:08:50.900 Delete I/O Submission Queue (00h): Supported 00:08:50.900 Create I/O Submission Queue (01h): Supported 00:08:50.900 Get Log Page (02h): Supported 00:08:50.900 Delete I/O Completion Queue (04h): Supported 00:08:50.900 Create I/O Completion Queue (05h): Supported 00:08:50.900 Identify (06h): Supported 00:08:50.900 Abort (08h): Supported 00:08:50.900 Set Features (09h): Supported 00:08:50.900 Get Features (0Ah): Supported 00:08:50.900 Asynchronous Event Request (0Ch): Supported 00:08:50.900 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.900 Directive Send (19h): Supported 00:08:50.900 Directive Receive (1Ah): Supported 00:08:50.900 Virtualization Management (1Ch): Supported 00:08:50.900 Doorbell Buffer Config (7Ch): Supported 00:08:50.900 Format NVM (80h): Supported LBA-Change 00:08:50.900 I/O Commands 00:08:50.900 ------------ 00:08:50.900 Flush (00h): Supported LBA-Change 00:08:50.900 Write (01h): Supported LBA-Change 00:08:50.900 Read (02h): Supported 00:08:50.900 Compare (05h): Supported 00:08:50.900 Write Zeroes (08h): Supported LBA-Change 00:08:50.900 Dataset Management (09h): Supported LBA-Change 00:08:50.900 Unknown (0Ch): Supported 00:08:50.900 Unknown (12h): Supported 00:08:50.900 Copy (19h): Supported LBA-Change 00:08:50.900 Unknown (1Dh): Supported LBA-Change 00:08:50.900 00:08:50.900 Error Log 00:08:50.900 ========= 00:08:50.900 00:08:50.900 Arbitration 00:08:50.900 =========== 00:08:50.900 Arbitration Burst: no limit 00:08:50.900 00:08:50.900 Power Management 00:08:50.900 ================ 00:08:50.900 Number of Power States: 1 00:08:50.900 Current Power State: Power State #0 00:08:50.900 Power State #0: 00:08:50.900 Max Power: 25.00 W 00:08:50.900 Non-Operational State: Operational 00:08:50.900 Entry Latency: 16 microseconds 00:08:50.900 Exit Latency: 4 microseconds 00:08:50.900 Relative Read Throughput: 0 00:08:50.900 Relative Read Latency: 0 00:08:50.900 Relative Write Throughput: 0 00:08:50.900 Relative Write Latency: 0 00:08:50.900 Idle Power: Not Reported 00:08:50.900 Active Power: Not Reported 00:08:50.900 Non-Operational Permissive Mode: Not Supported 00:08:50.900 00:08:50.900 Health Information 00:08:50.900 ================== 00:08:50.900 Critical Warnings: 00:08:50.900 Available Spare Space: OK 00:08:50.900 Temperature: OK 00:08:50.900 Device Reliability: OK 00:08:50.900 Read Only: No 00:08:50.900 Volatile Memory Backup: OK 00:08:50.900 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.900 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.900 Available Spare: 0% 00:08:50.900 Available Spare Threshold: 0% 00:08:50.900 Life Percentage Used: 0% 
00:08:50.900 Data Units Read: 2165 00:08:50.900 Data Units Written: 1845 00:08:50.900 Host Read Commands: 101376 00:08:50.900 Host Write Commands: 97146 00:08:50.900 Controller Busy Time: 0 minutes 00:08:50.900 Power Cycles: 0 00:08:50.900 Power On Hours: 0 hours 00:08:50.900 Unsafe Shutdowns: 0 00:08:50.900 Unrecoverable Media Errors: 0 00:08:50.900 Lifetime Error Log Entries: 0 00:08:50.900 Warning Temperature Time: 0 minutes 00:08:50.900 Critical Temperature Time: 0 minutes 00:08:50.900 00:08:50.900 Number of Queues 00:08:50.900 ================ 00:08:50.900 Number of I/O Submission Queues: 64 00:08:50.900 Number of I/O Completion Queues: 64 00:08:50.900 00:08:50.900 ZNS Specific Controller Data 00:08:50.900 ============================ 00:08:50.900 Zone Append Size Limit: 0 00:08:50.900 00:08:50.900 00:08:50.900 Active Namespaces 00:08:50.900 ================= 00:08:50.900 Namespace ID:1 00:08:50.900 Error Recovery Timeout: Unlimited 00:08:50.900 Command Set Identifier: NVM (00h) 00:08:50.900 Deallocate: Supported 00:08:50.900 Deallocated/Unwritten Error: Supported 00:08:50.900 Deallocated Read Value: All 0x00 00:08:50.900 Deallocate in Write Zeroes: Not Supported 00:08:50.900 Deallocated Guard Field: 0xFFFF 00:08:50.900 Flush: Supported 00:08:50.900 Reservation: Not Supported 00:08:50.900 Namespace Sharing Capabilities: Private 00:08:50.900 Size (in LBAs): 1048576 (4GiB) 00:08:50.900 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.900 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.900 Thin Provisioning: Not Supported 00:08:50.900 Per-NS Atomic Units: No 00:08:50.900 Maximum Single Source Range Length: 128 00:08:50.900 Maximum Copy Length: 128 00:08:50.900 Maximum Source Range Count: 128 00:08:50.900 NGUID/EUI64 Never Reused: No 00:08:50.900 Namespace Write Protected: No 00:08:50.900 Number of LBA Formats: 8 00:08:50.900 Current LBA Format: LBA Format #04 00:08:50.900 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.900 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.900 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.900 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.900 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.900 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.900 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.900 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.900 00:08:50.900 NVM Specific Namespace Data 00:08:50.900 =========================== 00:08:50.900 Logical Block Storage Tag Mask: 0 00:08:50.900 Protection Information Capabilities: 00:08:50.900 16b Guard Protection Information Storage Tag Support: No 00:08:50.900 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.900 Storage Tag Check Read Support: No 00:08:50.900 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.900 Namespace ID:2 00:08:50.900 Error Recovery Timeout: Unlimited 00:08:50.900 Command Set Identifier: NVM (00h) 00:08:50.900 Deallocate: Supported 00:08:50.900 Deallocated/Unwritten Error: Supported 00:08:50.900 Deallocated Read Value: All 0x00 00:08:50.900 Deallocate in Write Zeroes: Not Supported 00:08:50.900 Deallocated Guard Field: 0xFFFF 00:08:50.900 Flush: Supported 00:08:50.900 Reservation: Not Supported 00:08:50.900 Namespace Sharing Capabilities: Private 00:08:50.900 Size (in LBAs): 1048576 (4GiB) 00:08:50.900 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.900 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.900 Thin Provisioning: Not Supported 00:08:50.901 Per-NS Atomic Units: No 00:08:50.901 Maximum Single Source Range Length: 128 00:08:50.901 Maximum Copy Length: 128 00:08:50.901 Maximum Source Range Count: 128 00:08:50.901 NGUID/EUI64 Never Reused: No 00:08:50.901 Namespace Write Protected: No 00:08:50.901 Number of LBA Formats: 8 00:08:50.901 Current LBA Format: LBA Format #04 00:08:50.901 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.901 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.901 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.901 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.901 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.901 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.901 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.901 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.901 00:08:50.901 NVM Specific Namespace Data 00:08:50.901 =========================== 00:08:50.901 Logical Block Storage Tag Mask: 0 00:08:50.901 Protection Information Capabilities: 00:08:50.901 16b Guard Protection Information Storage Tag Support: No 00:08:50.901 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.901 Storage Tag Check Read Support: No 00:08:50.901 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Namespace ID:3 00:08:50.901 Error Recovery Timeout: Unlimited 00:08:50.901 Command Set Identifier: NVM (00h) 00:08:50.901 Deallocate: Supported 00:08:50.901 Deallocated/Unwritten Error: Supported 00:08:50.901 Deallocated Read Value: All 0x00 00:08:50.901 Deallocate in Write Zeroes: Not Supported 00:08:50.901 Deallocated Guard Field: 0xFFFF 00:08:50.901 Flush: Supported 00:08:50.901 Reservation: Not Supported 00:08:50.901 Namespace Sharing Capabilities: Private 00:08:50.901 Size (in LBAs): 1048576 (4GiB) 00:08:50.901 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.901 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.901 Thin Provisioning: Not Supported 00:08:50.901 Per-NS Atomic Units: No 00:08:50.901 Maximum Single Source Range 
Length: 128 00:08:50.901 Maximum Copy Length: 128 00:08:50.901 Maximum Source Range Count: 128 00:08:50.901 NGUID/EUI64 Never Reused: No 00:08:50.901 Namespace Write Protected: No 00:08:50.901 Number of LBA Formats: 8 00:08:50.901 Current LBA Format: LBA Format #04 00:08:50.901 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.901 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.901 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.901 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.901 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.901 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.901 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.901 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.901 00:08:50.901 NVM Specific Namespace Data 00:08:50.901 =========================== 00:08:50.901 Logical Block Storage Tag Mask: 0 00:08:50.901 Protection Information Capabilities: 00:08:50.901 16b Guard Protection Information Storage Tag Support: No 00:08:50.901 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.901 Storage Tag Check Read Support: No 00:08:50.901 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.901 13:04:43 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.901 13:04:43 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:51.160 ===================================================== 00:08:51.160 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.160 ===================================================== 00:08:51.160 Controller Capabilities/Features 00:08:51.160 ================================ 00:08:51.160 Vendor ID: 1b36 00:08:51.160 Subsystem Vendor ID: 1af4 00:08:51.160 Serial Number: 12343 00:08:51.160 Model Number: QEMU NVMe Ctrl 00:08:51.160 Firmware Version: 8.0.0 00:08:51.160 Recommended Arb Burst: 6 00:08:51.160 IEEE OUI Identifier: 00 54 52 00:08:51.160 Multi-path I/O 00:08:51.160 May have multiple subsystem ports: No 00:08:51.160 May have multiple controllers: Yes 00:08:51.160 Associated with SR-IOV VF: No 00:08:51.160 Max Data Transfer Size: 524288 00:08:51.160 Max Number of Namespaces: 256 00:08:51.160 Max Number of I/O Queues: 64 00:08:51.160 NVMe Specification Version (VS): 1.4 00:08:51.160 NVMe Specification Version (Identify): 1.4 00:08:51.160 Maximum Queue Entries: 2048 00:08:51.160 Contiguous Queues Required: Yes 00:08:51.160 Arbitration Mechanisms Supported 00:08:51.160 Weighted Round Robin: Not Supported 00:08:51.160 Vendor Specific: Not Supported 00:08:51.160 Reset Timeout: 7500 ms 00:08:51.160 Doorbell Stride: 4 bytes 00:08:51.160 NVM Subsystem Reset: Not Supported 
00:08:51.160 Command Sets Supported 00:08:51.160 NVM Command Set: Supported 00:08:51.160 Boot Partition: Not Supported 00:08:51.160 Memory Page Size Minimum: 4096 bytes 00:08:51.160 Memory Page Size Maximum: 65536 bytes 00:08:51.160 Persistent Memory Region: Not Supported 00:08:51.160 Optional Asynchronous Events Supported 00:08:51.160 Namespace Attribute Notices: Supported 00:08:51.160 Firmware Activation Notices: Not Supported 00:08:51.160 ANA Change Notices: Not Supported 00:08:51.160 PLE Aggregate Log Change Notices: Not Supported 00:08:51.160 LBA Status Info Alert Notices: Not Supported 00:08:51.160 EGE Aggregate Log Change Notices: Not Supported 00:08:51.160 Normal NVM Subsystem Shutdown event: Not Supported 00:08:51.160 Zone Descriptor Change Notices: Not Supported 00:08:51.160 Discovery Log Change Notices: Not Supported 00:08:51.160 Controller Attributes 00:08:51.160 128-bit Host Identifier: Not Supported 00:08:51.160 Non-Operational Permissive Mode: Not Supported 00:08:51.160 NVM Sets: Not Supported 00:08:51.160 Read Recovery Levels: Not Supported 00:08:51.160 Endurance Groups: Supported 00:08:51.160 Predictable Latency Mode: Not Supported 00:08:51.160 Traffic Based Keep Alive: Not Supported 00:08:51.160 Namespace Granularity: Not Supported 00:08:51.160 SQ Associations: Not Supported 00:08:51.160 UUID List: Not Supported 00:08:51.160 Multi-Domain Subsystem: Not Supported 00:08:51.160 Fixed Capacity Management: Not Supported 00:08:51.160 Variable Capacity Management: Not Supported 00:08:51.160 Delete Endurance Group: Not Supported 00:08:51.160 Delete NVM Set: Not Supported 00:08:51.160 Extended LBA Formats Supported: Supported 00:08:51.160 Flexible Data Placement Supported: Supported 00:08:51.160 00:08:51.160 Controller Memory Buffer Support 00:08:51.160 ================================ 00:08:51.160 Supported: No 00:08:51.160 00:08:51.160 Persistent Memory Region Support 00:08:51.160 ================================ 00:08:51.160 Supported: No 00:08:51.160 00:08:51.160 Admin Command Set Attributes 00:08:51.160 ============================ 00:08:51.160 Security Send/Receive: Not Supported 00:08:51.160 Format NVM: Supported 00:08:51.160 Firmware Activate/Download: Not Supported 00:08:51.160 Namespace Management: Supported 00:08:51.160 Device Self-Test: Not Supported 00:08:51.160 Directives: Supported 00:08:51.160 NVMe-MI: Not Supported 00:08:51.161 Virtualization Management: Not Supported 00:08:51.161 Doorbell Buffer Config: Supported 00:08:51.161 Get LBA Status Capability: Not Supported 00:08:51.161 Command & Feature Lockdown Capability: Not Supported 00:08:51.161 Abort Command Limit: 4 00:08:51.161 Async Event Request Limit: 4 00:08:51.161 Number of Firmware Slots: N/A 00:08:51.161 Firmware Slot 1 Read-Only: N/A 00:08:51.161 Firmware Activation Without Reset: N/A 00:08:51.161 Multiple Update Detection Support: N/A 00:08:51.161 Firmware Update Granularity: No Information Provided 00:08:51.161 Per-Namespace SMART Log: Yes 00:08:51.161 Asymmetric Namespace Access Log Page: Not Supported 00:08:51.161 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:51.161 Command Effects Log Page: Supported 00:08:51.161 Get Log Page Extended Data: Supported 00:08:51.161 Telemetry Log Pages: Not Supported 00:08:51.161 Persistent Event Log Pages: Not Supported 00:08:51.161 Supported Log Pages Log Page: May Support 00:08:51.161 Commands Supported & Effects Log Page: Not Supported 00:08:51.161 Feature Identifiers & Effects Log Page: May Support 00:08:51.161 NVMe-MI Commands & Effects Log Page: May 
Support 00:08:51.161 Data Area 4 for Telemetry Log: Not Supported 00:08:51.161 Error Log Page Entries Supported: 1 00:08:51.161 Keep Alive: Not Supported 00:08:51.161 00:08:51.161 NVM Command Set Attributes 00:08:51.161 ========================== 00:08:51.161 Submission Queue Entry Size 00:08:51.161 Max: 64 00:08:51.161 Min: 64 00:08:51.161 Completion Queue Entry Size 00:08:51.161 Max: 16 00:08:51.161 Min: 16 00:08:51.161 Number of Namespaces: 256 00:08:51.161 Compare Command: Supported 00:08:51.161 Write Uncorrectable Command: Not Supported 00:08:51.161 Dataset Management Command: Supported 00:08:51.161 Write Zeroes Command: Supported 00:08:51.161 Set Features Save Field: Supported 00:08:51.161 Reservations: Not Supported 00:08:51.161 Timestamp: Supported 00:08:51.161 Copy: Supported 00:08:51.161 Volatile Write Cache: Present 00:08:51.161 Atomic Write Unit (Normal): 1 00:08:51.161 Atomic Write Unit (PFail): 1 00:08:51.161 Atomic Compare & Write Unit: 1 00:08:51.161 Fused Compare & Write: Not Supported 00:08:51.161 Scatter-Gather List 00:08:51.161 SGL Command Set: Supported 00:08:51.161 SGL Keyed: Not Supported 00:08:51.161 SGL Bit Bucket Descriptor: Not Supported 00:08:51.161 SGL Metadata Pointer: Not Supported 00:08:51.161 Oversized SGL: Not Supported 00:08:51.161 SGL Metadata Address: Not Supported 00:08:51.161 SGL Offset: Not Supported 00:08:51.161 Transport SGL Data Block: Not Supported 00:08:51.161 Replay Protected Memory Block: Not Supported 00:08:51.161 00:08:51.161 Firmware Slot Information 00:08:51.161 ========================= 00:08:51.161 Active slot: 1 00:08:51.161 Slot 1 Firmware Revision: 1.0 00:08:51.161 00:08:51.161 00:08:51.161 Commands Supported and Effects 00:08:51.161 ============================== 00:08:51.161 Admin Commands 00:08:51.161 -------------- 00:08:51.161 Delete I/O Submission Queue (00h): Supported 00:08:51.161 Create I/O Submission Queue (01h): Supported 00:08:51.161 Get Log Page (02h): Supported 00:08:51.161 Delete I/O Completion Queue (04h): Supported 00:08:51.161 Create I/O Completion Queue (05h): Supported 00:08:51.161 Identify (06h): Supported 00:08:51.161 Abort (08h): Supported 00:08:51.161 Set Features (09h): Supported 00:08:51.161 Get Features (0Ah): Supported 00:08:51.161 Asynchronous Event Request (0Ch): Supported 00:08:51.161 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:51.161 Directive Send (19h): Supported 00:08:51.161 Directive Receive (1Ah): Supported 00:08:51.161 Virtualization Management (1Ch): Supported 00:08:51.161 Doorbell Buffer Config (7Ch): Supported 00:08:51.161 Format NVM (80h): Supported LBA-Change 00:08:51.161 I/O Commands 00:08:51.161 ------------ 00:08:51.161 Flush (00h): Supported LBA-Change 00:08:51.161 Write (01h): Supported LBA-Change 00:08:51.161 Read (02h): Supported 00:08:51.161 Compare (05h): Supported 00:08:51.161 Write Zeroes (08h): Supported LBA-Change 00:08:51.161 Dataset Management (09h): Supported LBA-Change 00:08:51.161 Unknown (0Ch): Supported 00:08:51.161 Unknown (12h): Supported 00:08:51.161 Copy (19h): Supported LBA-Change 00:08:51.161 Unknown (1Dh): Supported LBA-Change 00:08:51.161 00:08:51.161 Error Log 00:08:51.161 ========= 00:08:51.161 00:08:51.161 Arbitration 00:08:51.161 =========== 00:08:51.161 Arbitration Burst: no limit 00:08:51.161 00:08:51.161 Power Management 00:08:51.161 ================ 00:08:51.161 Number of Power States: 1 00:08:51.161 Current Power State: Power State #0 00:08:51.161 Power State #0: 00:08:51.161 Max Power: 25.00 W 00:08:51.161 Non-Operational State: 
Operational 00:08:51.161 Entry Latency: 16 microseconds 00:08:51.161 Exit Latency: 4 microseconds 00:08:51.161 Relative Read Throughput: 0 00:08:51.161 Relative Read Latency: 0 00:08:51.161 Relative Write Throughput: 0 00:08:51.161 Relative Write Latency: 0 00:08:51.161 Idle Power: Not Reported 00:08:51.161 Active Power: Not Reported 00:08:51.161 Non-Operational Permissive Mode: Not Supported 00:08:51.161 00:08:51.161 Health Information 00:08:51.161 ================== 00:08:51.161 Critical Warnings: 00:08:51.161 Available Spare Space: OK 00:08:51.161 Temperature: OK 00:08:51.161 Device Reliability: OK 00:08:51.161 Read Only: No 00:08:51.161 Volatile Memory Backup: OK 00:08:51.161 Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.161 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:51.161 Available Spare: 0% 00:08:51.161 Available Spare Threshold: 0% 00:08:51.161 Life Percentage Used: 0% 00:08:51.161 Data Units Read: 729 00:08:51.161 Data Units Written: 622 00:08:51.161 Host Read Commands: 33925 00:08:51.161 Host Write Commands: 32515 00:08:51.161 Controller Busy Time: 0 minutes 00:08:51.161 Power Cycles: 0 00:08:51.161 Power On Hours: 0 hours 00:08:51.161 Unsafe Shutdowns: 0 00:08:51.161 Unrecoverable Media Errors: 0 00:08:51.161 Lifetime Error Log Entries: 0 00:08:51.161 Warning Temperature Time: 0 minutes 00:08:51.161 Critical Temperature Time: 0 minutes 00:08:51.161 00:08:51.161 Number of Queues 00:08:51.161 ================ 00:08:51.161 Number of I/O Submission Queues: 64 00:08:51.161 Number of I/O Completion Queues: 64 00:08:51.161 00:08:51.161 ZNS Specific Controller Data 00:08:51.161 ============================ 00:08:51.161 Zone Append Size Limit: 0 00:08:51.161 00:08:51.161 00:08:51.161 Active Namespaces 00:08:51.161 ================= 00:08:51.161 Namespace ID:1 00:08:51.161 Error Recovery Timeout: Unlimited 00:08:51.161 Command Set Identifier: NVM (00h) 00:08:51.161 Deallocate: Supported 00:08:51.161 Deallocated/Unwritten Error: Supported 00:08:51.161 Deallocated Read Value: All 0x00 00:08:51.161 Deallocate in Write Zeroes: Not Supported 00:08:51.161 Deallocated Guard Field: 0xFFFF 00:08:51.161 Flush: Supported 00:08:51.161 Reservation: Not Supported 00:08:51.161 Namespace Sharing Capabilities: Multiple Controllers 00:08:51.161 Size (in LBAs): 262144 (1GiB) 00:08:51.161 Capacity (in LBAs): 262144 (1GiB) 00:08:51.161 Utilization (in LBAs): 262144 (1GiB) 00:08:51.161 Thin Provisioning: Not Supported 00:08:51.161 Per-NS Atomic Units: No 00:08:51.161 Maximum Single Source Range Length: 128 00:08:51.161 Maximum Copy Length: 128 00:08:51.161 Maximum Source Range Count: 128 00:08:51.161 NGUID/EUI64 Never Reused: No 00:08:51.161 Namespace Write Protected: No 00:08:51.161 Endurance group ID: 1 00:08:51.161 Number of LBA Formats: 8 00:08:51.161 Current LBA Format: LBA Format #04 00:08:51.162 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:51.162 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:51.162 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:51.162 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:51.162 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:51.162 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:51.162 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:51.162 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:51.162 00:08:51.162 Get Feature FDP: 00:08:51.162 ================ 00:08:51.162 Enabled: Yes 00:08:51.162 FDP configuration index: 0 00:08:51.162 00:08:51.162 FDP configurations log page 00:08:51.162 
=========================== 00:08:51.162 Number of FDP configurations: 1 00:08:51.162 Version: 0 00:08:51.162 Size: 112 00:08:51.162 FDP Configuration Descriptor: 0 00:08:51.162 Descriptor Size: 96 00:08:51.162 Reclaim Group Identifier format: 2 00:08:51.162 FDP Volatile Write Cache: Not Present 00:08:51.162 FDP Configuration: Valid 00:08:51.162 Vendor Specific Size: 0 00:08:51.162 Number of Reclaim Groups: 2 00:08:51.162 Number of Reclaim Unit Handles: 8 00:08:51.162 Max Placement Identifiers: 128 00:08:51.162 Number of Namespaces Supported: 256 00:08:51.162 Reclaim unit Nominal Size: 6000000 bytes 00:08:51.162 Estimated Reclaim Unit Time Limit: Not Reported 00:08:51.162 RUH Desc #000: RUH Type: Initially Isolated 00:08:51.162 RUH Desc #001: RUH Type: Initially Isolated 00:08:51.162 RUH Desc #002: RUH Type: Initially Isolated 00:08:51.162 RUH Desc #003: RUH Type: Initially Isolated 00:08:51.162 RUH Desc #004: RUH Type: Initially Isolated 00:08:51.162 RUH Desc #005: RUH Type: Initially Isolated 00:08:51.162 RUH Desc #006: RUH Type: Initially Isolated 00:08:51.162 RUH Desc #007: RUH Type: Initially Isolated 00:08:51.162 00:08:51.162 FDP reclaim unit handle usage log page 00:08:51.420 ====================================== 00:08:51.420 Number of Reclaim Unit Handles: 8 00:08:51.420 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:51.420 RUH Usage Desc #001: RUH Attributes: Unused 00:08:51.420 RUH Usage Desc #002: RUH Attributes: Unused 00:08:51.420 RUH Usage Desc #003: RUH Attributes: Unused 00:08:51.420 RUH Usage Desc #004: RUH Attributes: Unused 00:08:51.420 RUH Usage Desc #005: RUH Attributes: Unused 00:08:51.420 RUH Usage Desc #006: RUH Attributes: Unused 00:08:51.420 RUH Usage Desc #007: RUH Attributes: Unused 00:08:51.420 00:08:51.420 FDP statistics log page 00:08:51.420 ======================= 00:08:51.420 Host bytes with metadata written: 383033344 00:08:51.420 Media bytes with metadata written: 383074304 00:08:51.420 Media bytes erased: 0 00:08:51.420 00:08:51.420 FDP events log page 00:08:51.420 =================== 00:08:51.420 Number of FDP events: 0 00:08:51.420 00:08:51.420 NVM Specific Namespace Data 00:08:51.420 =========================== 00:08:51.420 Logical Block Storage Tag Mask: 0 00:08:51.420 Protection Information Capabilities: 00:08:51.420 16b Guard Protection Information Storage Tag Support: No 00:08:51.420 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:51.420 Storage Tag Check Read Support: No 00:08:51.420 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.420 00:08:51.420 real 0m1.569s 00:08:51.420 user 0m0.632s 00:08:51.420 sys 0m0.732s 00:08:51.420 ************************************ 00:08:51.420 END TEST nvme_identify 00:08:51.420 
************************************ 00:08:51.420 13:04:43 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.420 13:04:43 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:51.420 13:04:43 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:51.420 13:04:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.420 13:04:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.420 13:04:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.420 ************************************ 00:08:51.420 START TEST nvme_perf 00:08:51.420 ************************************ 00:08:51.420 13:04:43 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:51.420 13:04:43 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:52.794 Initializing NVMe Controllers 00:08:52.794 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.794 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.794 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.794 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.794 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:52.794 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:52.794 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:52.794 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:52.794 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:52.794 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:52.794 Initialization complete. Launching workers. 00:08:52.794 ======================================================== 00:08:52.794 Latency(us) 00:08:52.794 Device Information : IOPS MiB/s Average min max 00:08:52.794 PCIE (0000:00:10.0) NSID 1 from core 0: 13677.34 160.28 9373.59 7445.69 33870.43 00:08:52.794 PCIE (0000:00:11.0) NSID 1 from core 0: 13677.34 160.28 9357.25 7248.97 31805.43 00:08:52.794 PCIE (0000:00:13.0) NSID 1 from core 0: 13677.34 160.28 9338.85 7510.80 29996.60 00:08:52.794 PCIE (0000:00:12.0) NSID 1 from core 0: 13677.34 160.28 9320.66 7517.59 27875.21 00:08:52.794 PCIE (0000:00:12.0) NSID 2 from core 0: 13677.34 160.28 9302.13 7550.62 25826.83 00:08:52.794 PCIE (0000:00:12.0) NSID 3 from core 0: 13677.34 160.28 9283.72 7541.82 23516.87 00:08:52.794 ======================================================== 00:08:52.794 Total : 82064.06 961.69 9329.37 7248.97 33870.43 00:08:52.794 00:08:52.794 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:52.794 ================================================================================= 00:08:52.794 1.00000% : 7685.585us 00:08:52.794 10.00000% : 8043.055us 00:08:52.794 25.00000% : 8400.524us 00:08:52.794 50.00000% : 8877.149us 00:08:52.794 75.00000% : 9770.822us 00:08:52.794 90.00000% : 11141.120us 00:08:52.794 95.00000% : 11856.058us 00:08:52.794 98.00000% : 12928.465us 00:08:52.794 99.00000% : 15132.858us 00:08:52.794 99.50000% : 27286.807us 00:08:52.794 99.90000% : 33602.095us 00:08:52.794 99.99000% : 33840.407us 00:08:52.794 99.99900% : 34078.720us 00:08:52.794 99.99990% : 34078.720us 00:08:52.794 99.99999% : 34078.720us 00:08:52.794 00:08:52.794 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:52.794 ================================================================================= 00:08:52.794 1.00000% : 7685.585us 00:08:52.794 10.00000% : 8043.055us 00:08:52.794 25.00000% : 8400.524us 
00:08:52.794 50.00000% : 8817.571us 00:08:52.794 75.00000% : 9711.244us 00:08:52.794 90.00000% : 11200.698us 00:08:52.794 95.00000% : 11915.636us 00:08:52.794 98.00000% : 12749.731us 00:08:52.794 99.00000% : 14954.124us 00:08:52.794 99.50000% : 25499.462us 00:08:52.794 99.90000% : 31457.280us 00:08:52.794 99.99000% : 31933.905us 00:08:52.794 99.99900% : 31933.905us 00:08:52.794 99.99990% : 31933.905us 00:08:52.794 99.99999% : 31933.905us 00:08:52.794 00:08:52.794 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:52.794 ================================================================================= 00:08:52.794 1.00000% : 7745.164us 00:08:52.794 10.00000% : 8043.055us 00:08:52.794 25.00000% : 8400.524us 00:08:52.794 50.00000% : 8817.571us 00:08:52.794 75.00000% : 9770.822us 00:08:52.794 90.00000% : 11021.964us 00:08:52.794 95.00000% : 11796.480us 00:08:52.794 98.00000% : 12749.731us 00:08:52.794 99.00000% : 15371.171us 00:08:52.794 99.50000% : 23712.116us 00:08:52.794 99.90000% : 29669.935us 00:08:52.794 99.99000% : 30027.404us 00:08:52.794 99.99900% : 30027.404us 00:08:52.794 99.99990% : 30027.404us 00:08:52.794 99.99999% : 30027.404us 00:08:52.794 00:08:52.794 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:52.794 ================================================================================= 00:08:52.794 1.00000% : 7745.164us 00:08:52.794 10.00000% : 8102.633us 00:08:52.794 25.00000% : 8400.524us 00:08:52.794 50.00000% : 8877.149us 00:08:52.794 75.00000% : 9770.822us 00:08:52.794 90.00000% : 10962.385us 00:08:52.794 95.00000% : 11736.902us 00:08:52.794 98.00000% : 12690.153us 00:08:52.794 99.00000% : 15192.436us 00:08:52.794 99.50000% : 21567.302us 00:08:52.794 99.90000% : 27525.120us 00:08:52.794 99.99000% : 27882.589us 00:08:52.794 99.99900% : 27882.589us 00:08:52.794 99.99990% : 27882.589us 00:08:52.794 99.99999% : 27882.589us 00:08:52.794 00:08:52.794 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:52.794 ================================================================================= 00:08:52.794 1.00000% : 7804.742us 00:08:52.794 10.00000% : 8102.633us 00:08:52.794 25.00000% : 8400.524us 00:08:52.794 50.00000% : 8877.149us 00:08:52.794 75.00000% : 9770.822us 00:08:52.794 90.00000% : 11021.964us 00:08:52.794 95.00000% : 11736.902us 00:08:52.794 98.00000% : 12630.575us 00:08:52.794 99.00000% : 15073.280us 00:08:52.794 99.50000% : 19422.487us 00:08:52.794 99.90000% : 25380.305us 00:08:52.794 99.99000% : 25856.931us 00:08:52.794 99.99900% : 25856.931us 00:08:52.794 99.99990% : 25856.931us 00:08:52.794 99.99999% : 25856.931us 00:08:52.794 00:08:52.794 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:52.794 ================================================================================= 00:08:52.794 1.00000% : 7745.164us 00:08:52.794 10.00000% : 8102.633us 00:08:52.794 25.00000% : 8400.524us 00:08:52.794 50.00000% : 8817.571us 00:08:52.794 75.00000% : 9770.822us 00:08:52.794 90.00000% : 11081.542us 00:08:52.794 95.00000% : 11796.480us 00:08:52.794 98.00000% : 12690.153us 00:08:52.794 99.00000% : 15013.702us 00:08:52.794 99.50000% : 17277.673us 00:08:52.794 99.90000% : 23235.491us 00:08:52.794 99.99000% : 23592.960us 00:08:52.794 99.99900% : 23592.960us 00:08:52.794 99.99990% : 23592.960us 00:08:52.794 99.99999% : 23592.960us 00:08:52.794 00:08:52.794 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:52.794 
============================================================================== 00:08:52.794 Range in us Cumulative IO count 00:08:52.794 7417.484 - 7447.273: 0.0073% ( 1) 00:08:52.795 7447.273 - 7477.062: 0.0365% ( 4) 00:08:52.795 7477.062 - 7506.851: 0.0876% ( 7) 00:08:52.795 7506.851 - 7536.640: 0.2044% ( 16) 00:08:52.795 7536.640 - 7566.429: 0.4235% ( 30) 00:08:52.795 7566.429 - 7596.218: 0.6352% ( 29) 00:08:52.795 7596.218 - 7626.007: 0.8835% ( 34) 00:08:52.795 7626.007 - 7685.585: 1.5333% ( 89) 00:08:52.795 7685.585 - 7745.164: 2.6358% ( 151) 00:08:52.795 7745.164 - 7804.742: 3.9501% ( 180) 00:08:52.795 7804.742 - 7864.320: 5.5564% ( 220) 00:08:52.795 7864.320 - 7923.898: 7.3306% ( 243) 00:08:52.795 7923.898 - 7983.476: 9.2290% ( 260) 00:08:52.795 7983.476 - 8043.055: 11.3464% ( 290) 00:08:52.795 8043.055 - 8102.633: 13.6901% ( 321) 00:08:52.795 8102.633 - 8162.211: 16.2967% ( 357) 00:08:52.795 8162.211 - 8221.789: 19.0055% ( 371) 00:08:52.795 8221.789 - 8281.367: 21.7509% ( 376) 00:08:52.795 8281.367 - 8340.945: 24.5400% ( 382) 00:08:52.795 8340.945 - 8400.524: 27.3949% ( 391) 00:08:52.795 8400.524 - 8460.102: 30.5345% ( 430) 00:08:52.795 8460.102 - 8519.680: 33.5426% ( 412) 00:08:52.795 8519.680 - 8579.258: 36.8356% ( 451) 00:08:52.795 8579.258 - 8638.836: 40.0847% ( 445) 00:08:52.795 8638.836 - 8698.415: 43.3484% ( 447) 00:08:52.795 8698.415 - 8757.993: 46.4150% ( 420) 00:08:52.795 8757.993 - 8817.571: 49.3721% ( 405) 00:08:52.795 8817.571 - 8877.149: 52.2123% ( 389) 00:08:52.795 8877.149 - 8936.727: 54.5999% ( 327) 00:08:52.795 8936.727 - 8996.305: 56.9217% ( 318) 00:08:52.795 8996.305 - 9055.884: 59.0026% ( 285) 00:08:52.795 9055.884 - 9115.462: 60.9959% ( 273) 00:08:52.795 9115.462 - 9175.040: 62.8067% ( 248) 00:08:52.795 9175.040 - 9234.618: 64.4787% ( 229) 00:08:52.795 9234.618 - 9294.196: 66.1507% ( 229) 00:08:52.795 9294.196 - 9353.775: 67.7497% ( 219) 00:08:52.795 9353.775 - 9413.353: 69.2027% ( 199) 00:08:52.795 9413.353 - 9472.931: 70.5680% ( 187) 00:08:52.795 9472.931 - 9532.509: 71.7655% ( 164) 00:08:52.795 9532.509 - 9592.087: 72.8972% ( 155) 00:08:52.795 9592.087 - 9651.665: 73.8975% ( 137) 00:08:52.795 9651.665 - 9711.244: 74.8613% ( 132) 00:08:52.795 9711.244 - 9770.822: 75.7155% ( 117) 00:08:52.795 9770.822 - 9830.400: 76.5771% ( 118) 00:08:52.795 9830.400 - 9889.978: 77.3072% ( 100) 00:08:52.795 9889.978 - 9949.556: 78.0739% ( 105) 00:08:52.795 9949.556 - 10009.135: 78.8697% ( 109) 00:08:52.795 10009.135 - 10068.713: 79.7240% ( 117) 00:08:52.795 10068.713 - 10128.291: 80.4322% ( 97) 00:08:52.795 10128.291 - 10187.869: 81.1770% ( 102) 00:08:52.795 10187.869 - 10247.447: 81.8560% ( 93) 00:08:52.795 10247.447 - 10307.025: 82.4912% ( 87) 00:08:52.795 10307.025 - 10366.604: 83.1922% ( 96) 00:08:52.795 10366.604 - 10426.182: 83.8201% ( 86) 00:08:52.795 10426.182 - 10485.760: 84.4991% ( 93) 00:08:52.795 10485.760 - 10545.338: 85.1124% ( 84) 00:08:52.795 10545.338 - 10604.916: 85.6162% ( 69) 00:08:52.795 10604.916 - 10664.495: 86.1711% ( 76) 00:08:52.795 10664.495 - 10724.073: 86.7991% ( 86) 00:08:52.795 10724.073 - 10783.651: 87.3321% ( 73) 00:08:52.795 10783.651 - 10843.229: 87.7994% ( 64) 00:08:52.795 10843.229 - 10902.807: 88.3032% ( 69) 00:08:52.795 10902.807 - 10962.385: 88.8654% ( 77) 00:08:52.795 10962.385 - 11021.964: 89.3327% ( 64) 00:08:52.795 11021.964 - 11081.542: 89.8145% ( 66) 00:08:52.795 11081.542 - 11141.120: 90.3110% ( 68) 00:08:52.795 11141.120 - 11200.698: 90.7345% ( 58) 00:08:52.795 11200.698 - 11260.276: 91.2018% ( 64) 00:08:52.795 11260.276 - 
11319.855: 91.6910% ( 67) 00:08:52.795 11319.855 - 11379.433: 92.1583% ( 64) 00:08:52.795 11379.433 - 11439.011: 92.6329% ( 65) 00:08:52.795 11439.011 - 11498.589: 93.0491% ( 57) 00:08:52.795 11498.589 - 11558.167: 93.4725% ( 58) 00:08:52.795 11558.167 - 11617.745: 93.8668% ( 54) 00:08:52.795 11617.745 - 11677.324: 94.2027% ( 46) 00:08:52.795 11677.324 - 11736.902: 94.5824% ( 52) 00:08:52.795 11736.902 - 11796.480: 94.9766% ( 54) 00:08:52.795 11796.480 - 11856.058: 95.3052% ( 45) 00:08:52.795 11856.058 - 11915.636: 95.6265% ( 44) 00:08:52.795 11915.636 - 11975.215: 95.9404% ( 43) 00:08:52.795 11975.215 - 12034.793: 96.1960% ( 35) 00:08:52.795 12034.793 - 12094.371: 96.4004% ( 28) 00:08:52.795 12094.371 - 12153.949: 96.6048% ( 28) 00:08:52.795 12153.949 - 12213.527: 96.7947% ( 26) 00:08:52.795 12213.527 - 12273.105: 96.9480% ( 21) 00:08:52.795 12273.105 - 12332.684: 97.1013% ( 21) 00:08:52.795 12332.684 - 12392.262: 97.2547% ( 21) 00:08:52.795 12392.262 - 12451.840: 97.3934% ( 19) 00:08:52.795 12451.840 - 12511.418: 97.5248% ( 18) 00:08:52.795 12511.418 - 12570.996: 97.6343% ( 15) 00:08:52.795 12570.996 - 12630.575: 97.7220% ( 12) 00:08:52.795 12630.575 - 12690.153: 97.8023% ( 11) 00:08:52.795 12690.153 - 12749.731: 97.8826% ( 11) 00:08:52.795 12749.731 - 12809.309: 97.9556% ( 10) 00:08:52.795 12809.309 - 12868.887: 97.9994% ( 6) 00:08:52.795 12868.887 - 12928.465: 98.0359% ( 5) 00:08:52.795 12928.465 - 12988.044: 98.0651% ( 4) 00:08:52.795 12988.044 - 13047.622: 98.0870% ( 3) 00:08:52.795 13047.622 - 13107.200: 98.1016% ( 2) 00:08:52.795 13107.200 - 13166.778: 98.1235% ( 3) 00:08:52.795 13166.778 - 13226.356: 98.1308% ( 1) 00:08:52.795 13464.669 - 13524.247: 98.1381% ( 1) 00:08:52.795 13524.247 - 13583.825: 98.1600% ( 3) 00:08:52.795 13583.825 - 13643.404: 98.1746% ( 2) 00:08:52.795 13643.404 - 13702.982: 98.1966% ( 3) 00:08:52.795 13702.982 - 13762.560: 98.2258% ( 4) 00:08:52.795 13762.560 - 13822.138: 98.2696% ( 6) 00:08:52.795 13822.138 - 13881.716: 98.3134% ( 6) 00:08:52.795 13881.716 - 13941.295: 98.3426% ( 4) 00:08:52.795 13941.295 - 14000.873: 98.3718% ( 4) 00:08:52.795 14000.873 - 14060.451: 98.4156% ( 6) 00:08:52.795 14060.451 - 14120.029: 98.4521% ( 5) 00:08:52.795 14120.029 - 14179.607: 98.4813% ( 4) 00:08:52.795 14179.607 - 14239.185: 98.5251% ( 6) 00:08:52.795 14239.185 - 14298.764: 98.5543% ( 4) 00:08:52.795 14298.764 - 14358.342: 98.5762% ( 3) 00:08:52.795 14358.342 - 14417.920: 98.6127% ( 5) 00:08:52.795 14417.920 - 14477.498: 98.6346% ( 3) 00:08:52.795 14477.498 - 14537.076: 98.6784% ( 6) 00:08:52.795 14537.076 - 14596.655: 98.7077% ( 4) 00:08:52.795 14596.655 - 14656.233: 98.7442% ( 5) 00:08:52.795 14656.233 - 14715.811: 98.7880% ( 6) 00:08:52.795 14715.811 - 14775.389: 98.8172% ( 4) 00:08:52.795 14775.389 - 14834.967: 98.8464% ( 4) 00:08:52.795 14834.967 - 14894.545: 98.8829% ( 5) 00:08:52.795 14894.545 - 14954.124: 98.9121% ( 4) 00:08:52.795 14954.124 - 15013.702: 98.9559% ( 6) 00:08:52.795 15013.702 - 15073.280: 98.9997% ( 6) 00:08:52.795 15073.280 - 15132.858: 99.0216% ( 3) 00:08:52.795 15132.858 - 15192.436: 99.0435% ( 3) 00:08:52.795 15192.436 - 15252.015: 99.0581% ( 2) 00:08:52.795 15252.015 - 15371.171: 99.0654% ( 1) 00:08:52.795 25022.836 - 25141.993: 99.0727% ( 1) 00:08:52.795 25141.993 - 25261.149: 99.0873% ( 2) 00:08:52.795 25261.149 - 25380.305: 99.0946% ( 1) 00:08:52.795 25380.305 - 25499.462: 99.1238% ( 4) 00:08:52.795 25499.462 - 25618.618: 99.1603% ( 5) 00:08:52.795 25618.618 - 25737.775: 99.1895% ( 4) 00:08:52.795 25737.775 - 25856.931: 99.2041% ( 2) 
00:08:52.795 25856.931 - 25976.087: 99.2334% ( 4) 00:08:52.795 25976.087 - 26095.244: 99.2553% ( 3) 00:08:52.795 26095.244 - 26214.400: 99.2772% ( 3) 00:08:52.795 26214.400 - 26333.556: 99.2991% ( 3) 00:08:52.795 26333.556 - 26452.713: 99.3137% ( 2) 00:08:52.795 26452.713 - 26571.869: 99.3502% ( 5) 00:08:52.795 26571.869 - 26691.025: 99.3721% ( 3) 00:08:52.795 26691.025 - 26810.182: 99.4013% ( 4) 00:08:52.795 26810.182 - 26929.338: 99.4305% ( 4) 00:08:52.795 26929.338 - 27048.495: 99.4524% ( 3) 00:08:52.795 27048.495 - 27167.651: 99.4816% ( 4) 00:08:52.795 27167.651 - 27286.807: 99.5035% ( 3) 00:08:52.795 27286.807 - 27405.964: 99.5254% ( 3) 00:08:52.795 27405.964 - 27525.120: 99.5327% ( 1) 00:08:52.795 31457.280 - 31695.593: 99.5473% ( 2) 00:08:52.795 31695.593 - 31933.905: 99.5984% ( 7) 00:08:52.795 31933.905 - 32172.218: 99.6495% ( 7) 00:08:52.795 32172.218 - 32410.531: 99.6787% ( 4) 00:08:52.795 32410.531 - 32648.844: 99.7371% ( 8) 00:08:52.795 32648.844 - 32887.156: 99.7883% ( 7) 00:08:52.795 32887.156 - 33125.469: 99.8394% ( 7) 00:08:52.795 33125.469 - 33363.782: 99.8905% ( 7) 00:08:52.795 33363.782 - 33602.095: 99.9489% ( 8) 00:08:52.795 33602.095 - 33840.407: 99.9927% ( 6) 00:08:52.795 33840.407 - 34078.720: 100.0000% ( 1) 00:08:52.795 00:08:52.795 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:52.795 ============================================================================== 00:08:52.795 Range in us Cumulative IO count 00:08:52.795 7238.749 - 7268.538: 0.0146% ( 2) 00:08:52.795 7268.538 - 7298.327: 0.0292% ( 2) 00:08:52.795 7298.327 - 7328.116: 0.0511% ( 3) 00:08:52.795 7328.116 - 7357.905: 0.0657% ( 2) 00:08:52.795 7357.905 - 7387.695: 0.0876% ( 3) 00:08:52.795 7387.695 - 7417.484: 0.1168% ( 4) 00:08:52.795 7417.484 - 7447.273: 0.1387% ( 3) 00:08:52.795 7447.273 - 7477.062: 0.1679% ( 4) 00:08:52.795 7477.062 - 7506.851: 0.1971% ( 4) 00:08:52.795 7506.851 - 7536.640: 0.2555% ( 8) 00:08:52.795 7536.640 - 7566.429: 0.3140% ( 8) 00:08:52.795 7566.429 - 7596.218: 0.4381% ( 17) 00:08:52.795 7596.218 - 7626.007: 0.6060% ( 23) 00:08:52.796 7626.007 - 7685.585: 1.1171% ( 70) 00:08:52.796 7685.585 - 7745.164: 1.8400% ( 99) 00:08:52.796 7745.164 - 7804.742: 2.9790% ( 156) 00:08:52.796 7804.742 - 7864.320: 4.4393% ( 200) 00:08:52.796 7864.320 - 7923.898: 6.1113% ( 229) 00:08:52.796 7923.898 - 7983.476: 8.0754% ( 269) 00:08:52.796 7983.476 - 8043.055: 10.2804% ( 302) 00:08:52.796 8043.055 - 8102.633: 12.7555% ( 339) 00:08:52.796 8102.633 - 8162.211: 15.4206% ( 365) 00:08:52.796 8162.211 - 8221.789: 18.3119% ( 396) 00:08:52.796 8221.789 - 8281.367: 21.2690% ( 405) 00:08:52.796 8281.367 - 8340.945: 24.4524% ( 436) 00:08:52.796 8340.945 - 8400.524: 27.6650% ( 440) 00:08:52.796 8400.524 - 8460.102: 31.0383% ( 462) 00:08:52.796 8460.102 - 8519.680: 34.4261% ( 464) 00:08:52.796 8519.680 - 8579.258: 37.9162% ( 478) 00:08:52.796 8579.258 - 8638.836: 41.3697% ( 473) 00:08:52.796 8638.836 - 8698.415: 44.8233% ( 473) 00:08:52.796 8698.415 - 8757.993: 48.0943% ( 448) 00:08:52.796 8757.993 - 8817.571: 51.0660% ( 407) 00:08:52.796 8817.571 - 8877.149: 53.7310% ( 365) 00:08:52.796 8877.149 - 8936.727: 56.1989% ( 338) 00:08:52.796 8936.727 - 8996.305: 58.3309% ( 292) 00:08:52.796 8996.305 - 9055.884: 60.4410% ( 289) 00:08:52.796 9055.884 - 9115.462: 62.3394% ( 260) 00:08:52.796 9115.462 - 9175.040: 64.1355% ( 246) 00:08:52.796 9175.040 - 9234.618: 65.8221% ( 231) 00:08:52.796 9234.618 - 9294.196: 67.2897% ( 201) 00:08:52.796 9294.196 - 9353.775: 68.8449% ( 213) 00:08:52.796 9353.775 - 
9413.353: 70.2030% ( 186) 00:08:52.796 9413.353 - 9472.931: 71.4807% ( 175) 00:08:52.796 9472.931 - 9532.509: 72.5978% ( 153) 00:08:52.796 9532.509 - 9592.087: 73.5981% ( 137) 00:08:52.796 9592.087 - 9651.665: 74.4816% ( 121) 00:08:52.796 9651.665 - 9711.244: 75.3432% ( 118) 00:08:52.796 9711.244 - 9770.822: 76.1317% ( 108) 00:08:52.796 9770.822 - 9830.400: 76.8619% ( 100) 00:08:52.796 9830.400 - 9889.978: 77.5774% ( 98) 00:08:52.796 9889.978 - 9949.556: 78.3002% ( 99) 00:08:52.796 9949.556 - 10009.135: 78.9647% ( 91) 00:08:52.796 10009.135 - 10068.713: 79.5707% ( 83) 00:08:52.796 10068.713 - 10128.291: 80.1913% ( 85) 00:08:52.796 10128.291 - 10187.869: 80.7316% ( 74) 00:08:52.796 10187.869 - 10247.447: 81.2792% ( 75) 00:08:52.796 10247.447 - 10307.025: 81.7757% ( 68) 00:08:52.796 10307.025 - 10366.604: 82.2941% ( 71) 00:08:52.796 10366.604 - 10426.182: 82.8490% ( 76) 00:08:52.796 10426.182 - 10485.760: 83.4258% ( 79) 00:08:52.796 10485.760 - 10545.338: 84.0245% ( 82) 00:08:52.796 10545.338 - 10604.916: 84.6159% ( 81) 00:08:52.796 10604.916 - 10664.495: 85.1928% ( 79) 00:08:52.796 10664.495 - 10724.073: 85.7477% ( 76) 00:08:52.796 10724.073 - 10783.651: 86.3172% ( 78) 00:08:52.796 10783.651 - 10843.229: 86.9305% ( 84) 00:08:52.796 10843.229 - 10902.807: 87.4781% ( 75) 00:08:52.796 10902.807 - 10962.385: 88.0622% ( 80) 00:08:52.796 10962.385 - 11021.964: 88.6390% ( 79) 00:08:52.796 11021.964 - 11081.542: 89.2012% ( 77) 00:08:52.796 11081.542 - 11141.120: 89.7342% ( 73) 00:08:52.796 11141.120 - 11200.698: 90.2234% ( 67) 00:08:52.796 11200.698 - 11260.276: 90.6980% ( 65) 00:08:52.796 11260.276 - 11319.855: 91.1726% ( 65) 00:08:52.796 11319.855 - 11379.433: 91.6837% ( 70) 00:08:52.796 11379.433 - 11439.011: 92.1583% ( 65) 00:08:52.796 11439.011 - 11498.589: 92.6037% ( 61) 00:08:52.796 11498.589 - 11558.167: 92.9907% ( 53) 00:08:52.796 11558.167 - 11617.745: 93.4287% ( 60) 00:08:52.796 11617.745 - 11677.324: 93.8595% ( 59) 00:08:52.796 11677.324 - 11736.902: 94.2465% ( 53) 00:08:52.796 11736.902 - 11796.480: 94.6116% ( 50) 00:08:52.796 11796.480 - 11856.058: 94.9839% ( 51) 00:08:52.796 11856.058 - 11915.636: 95.3417% ( 49) 00:08:52.796 11915.636 - 11975.215: 95.6776% ( 46) 00:08:52.796 11975.215 - 12034.793: 95.9842% ( 42) 00:08:52.796 12034.793 - 12094.371: 96.2471% ( 36) 00:08:52.796 12094.371 - 12153.949: 96.4661% ( 30) 00:08:52.796 12153.949 - 12213.527: 96.6633% ( 27) 00:08:52.796 12213.527 - 12273.105: 96.8604% ( 27) 00:08:52.796 12273.105 - 12332.684: 97.0794% ( 30) 00:08:52.796 12332.684 - 12392.262: 97.2766% ( 27) 00:08:52.796 12392.262 - 12451.840: 97.4737% ( 27) 00:08:52.796 12451.840 - 12511.418: 97.6270% ( 21) 00:08:52.796 12511.418 - 12570.996: 97.7585% ( 18) 00:08:52.796 12570.996 - 12630.575: 97.8461% ( 12) 00:08:52.796 12630.575 - 12690.153: 97.9337% ( 12) 00:08:52.796 12690.153 - 12749.731: 98.0140% ( 11) 00:08:52.796 12749.731 - 12809.309: 98.0797% ( 9) 00:08:52.796 12809.309 - 12868.887: 98.1308% ( 7) 00:08:52.796 13643.404 - 13702.982: 98.1527% ( 3) 00:08:52.796 13702.982 - 13762.560: 98.1673% ( 2) 00:08:52.796 13762.560 - 13822.138: 98.2039% ( 5) 00:08:52.796 13822.138 - 13881.716: 98.2404% ( 5) 00:08:52.796 13881.716 - 13941.295: 98.2915% ( 7) 00:08:52.796 13941.295 - 14000.873: 98.3426% ( 7) 00:08:52.796 14000.873 - 14060.451: 98.3864% ( 6) 00:08:52.796 14060.451 - 14120.029: 98.4375% ( 7) 00:08:52.796 14120.029 - 14179.607: 98.4886% ( 7) 00:08:52.796 14179.607 - 14239.185: 98.5324% ( 6) 00:08:52.796 14239.185 - 14298.764: 98.5762% ( 6) 00:08:52.796 14298.764 - 
14358.342: 98.6273% ( 7) 00:08:52.796 14358.342 - 14417.920: 98.6784% ( 7) 00:08:52.796 14417.920 - 14477.498: 98.7150% ( 5) 00:08:52.796 14477.498 - 14537.076: 98.7661% ( 7) 00:08:52.796 14537.076 - 14596.655: 98.8026% ( 5) 00:08:52.796 14596.655 - 14656.233: 98.8537% ( 7) 00:08:52.796 14656.233 - 14715.811: 98.8975% ( 6) 00:08:52.796 14715.811 - 14775.389: 98.9413% ( 6) 00:08:52.796 14775.389 - 14834.967: 98.9778% ( 5) 00:08:52.796 14834.967 - 14894.545: 98.9924% ( 2) 00:08:52.796 14894.545 - 14954.124: 99.0143% ( 3) 00:08:52.796 14954.124 - 15013.702: 99.0362% ( 3) 00:08:52.796 15013.702 - 15073.280: 99.0581% ( 3) 00:08:52.796 15073.280 - 15132.858: 99.0654% ( 1) 00:08:52.796 23354.647 - 23473.804: 99.0727% ( 1) 00:08:52.796 23473.804 - 23592.960: 99.0946% ( 3) 00:08:52.796 23592.960 - 23712.116: 99.1165% ( 3) 00:08:52.796 23712.116 - 23831.273: 99.1457% ( 4) 00:08:52.796 23831.273 - 23950.429: 99.1676% ( 3) 00:08:52.796 23950.429 - 24069.585: 99.1968% ( 4) 00:08:52.796 24069.585 - 24188.742: 99.2261% ( 4) 00:08:52.796 24188.742 - 24307.898: 99.2407% ( 2) 00:08:52.796 24307.898 - 24427.055: 99.2626% ( 3) 00:08:52.796 24427.055 - 24546.211: 99.2918% ( 4) 00:08:52.796 24546.211 - 24665.367: 99.3137% ( 3) 00:08:52.796 24665.367 - 24784.524: 99.3429% ( 4) 00:08:52.796 24784.524 - 24903.680: 99.3721% ( 4) 00:08:52.796 24903.680 - 25022.836: 99.4013% ( 4) 00:08:52.796 25022.836 - 25141.993: 99.4232% ( 3) 00:08:52.796 25141.993 - 25261.149: 99.4524% ( 4) 00:08:52.796 25261.149 - 25380.305: 99.4816% ( 4) 00:08:52.796 25380.305 - 25499.462: 99.5108% ( 4) 00:08:52.796 25499.462 - 25618.618: 99.5327% ( 3) 00:08:52.796 29669.935 - 29789.091: 99.5400% ( 1) 00:08:52.796 29789.091 - 29908.247: 99.5692% ( 4) 00:08:52.796 29908.247 - 30027.404: 99.5984% ( 4) 00:08:52.796 30027.404 - 30146.560: 99.6276% ( 4) 00:08:52.796 30146.560 - 30265.716: 99.6495% ( 3) 00:08:52.796 30265.716 - 30384.873: 99.6714% ( 3) 00:08:52.796 30384.873 - 30504.029: 99.6933% ( 3) 00:08:52.796 30504.029 - 30742.342: 99.7518% ( 8) 00:08:52.796 30742.342 - 30980.655: 99.8102% ( 8) 00:08:52.796 30980.655 - 31218.967: 99.8613% ( 7) 00:08:52.796 31218.967 - 31457.280: 99.9124% ( 7) 00:08:52.796 31457.280 - 31695.593: 99.9708% ( 8) 00:08:52.796 31695.593 - 31933.905: 100.0000% ( 4) 00:08:52.796 00:08:52.796 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:52.796 ============================================================================== 00:08:52.796 Range in us Cumulative IO count 00:08:52.796 7506.851 - 7536.640: 0.0219% ( 3) 00:08:52.796 7536.640 - 7566.429: 0.0876% ( 9) 00:08:52.796 7566.429 - 7596.218: 0.1825% ( 13) 00:08:52.796 7596.218 - 7626.007: 0.3067% ( 17) 00:08:52.796 7626.007 - 7685.585: 0.7301% ( 58) 00:08:52.796 7685.585 - 7745.164: 1.4019% ( 92) 00:08:52.796 7745.164 - 7804.742: 2.4241% ( 140) 00:08:52.796 7804.742 - 7864.320: 3.8697% ( 198) 00:08:52.796 7864.320 - 7923.898: 5.6221% ( 240) 00:08:52.796 7923.898 - 7983.476: 7.7249% ( 288) 00:08:52.796 7983.476 - 8043.055: 10.0248% ( 315) 00:08:52.796 8043.055 - 8102.633: 12.4124% ( 327) 00:08:52.796 8102.633 - 8162.211: 15.0847% ( 366) 00:08:52.796 8162.211 - 8221.789: 17.9761% ( 396) 00:08:52.796 8221.789 - 8281.367: 20.9696% ( 410) 00:08:52.796 8281.367 - 8340.945: 24.1019% ( 429) 00:08:52.796 8340.945 - 8400.524: 27.3949% ( 451) 00:08:52.796 8400.524 - 8460.102: 30.6659% ( 448) 00:08:52.796 8460.102 - 8519.680: 34.0829% ( 468) 00:08:52.796 8519.680 - 8579.258: 37.4854% ( 466) 00:08:52.796 8579.258 - 8638.836: 40.8659% ( 463) 00:08:52.796 8638.836 
- 8698.415: 44.2392% ( 462) 00:08:52.796 8698.415 - 8757.993: 47.5029% ( 447) 00:08:52.796 8757.993 - 8817.571: 50.3797% ( 394) 00:08:52.796 8817.571 - 8877.149: 52.9206% ( 348) 00:08:52.796 8877.149 - 8936.727: 55.2497% ( 319) 00:08:52.796 8936.727 - 8996.305: 57.4328% ( 299) 00:08:52.796 8996.305 - 9055.884: 59.3969% ( 269) 00:08:52.796 9055.884 - 9115.462: 61.2515% ( 254) 00:08:52.796 9115.462 - 9175.040: 62.9819% ( 237) 00:08:52.796 9175.040 - 9234.618: 64.6831% ( 233) 00:08:52.796 9234.618 - 9294.196: 66.1872% ( 206) 00:08:52.796 9294.196 - 9353.775: 67.5891% ( 192) 00:08:52.796 9353.775 - 9413.353: 68.9033% ( 180) 00:08:52.796 9413.353 - 9472.931: 70.1884% ( 176) 00:08:52.797 9472.931 - 9532.509: 71.3785% ( 163) 00:08:52.797 9532.509 - 9592.087: 72.4299% ( 144) 00:08:52.797 9592.087 - 9651.665: 73.4448% ( 139) 00:08:52.797 9651.665 - 9711.244: 74.3502% ( 124) 00:08:52.797 9711.244 - 9770.822: 75.2775% ( 127) 00:08:52.797 9770.822 - 9830.400: 76.2120% ( 128) 00:08:52.797 9830.400 - 9889.978: 77.0298% ( 112) 00:08:52.797 9889.978 - 9949.556: 77.8037% ( 106) 00:08:52.797 9949.556 - 10009.135: 78.6288% ( 113) 00:08:52.797 10009.135 - 10068.713: 79.3954% ( 105) 00:08:52.797 10068.713 - 10128.291: 80.1694% ( 106) 00:08:52.797 10128.291 - 10187.869: 80.9287% ( 104) 00:08:52.797 10187.869 - 10247.447: 81.6443% ( 98) 00:08:52.797 10247.447 - 10307.025: 82.3671% ( 99) 00:08:52.797 10307.025 - 10366.604: 83.0023% ( 87) 00:08:52.797 10366.604 - 10426.182: 83.7033% ( 96) 00:08:52.797 10426.182 - 10485.760: 84.4553% ( 103) 00:08:52.797 10485.760 - 10545.338: 85.2585% ( 110) 00:08:52.797 10545.338 - 10604.916: 86.0397% ( 107) 00:08:52.797 10604.916 - 10664.495: 86.7918% ( 103) 00:08:52.797 10664.495 - 10724.073: 87.4708% ( 93) 00:08:52.797 10724.073 - 10783.651: 88.0841% ( 84) 00:08:52.797 10783.651 - 10843.229: 88.7266% ( 88) 00:08:52.797 10843.229 - 10902.807: 89.3034% ( 79) 00:08:52.797 10902.807 - 10962.385: 89.8584% ( 76) 00:08:52.797 10962.385 - 11021.964: 90.3548% ( 68) 00:08:52.797 11021.964 - 11081.542: 90.8148% ( 63) 00:08:52.797 11081.542 - 11141.120: 91.2310% ( 57) 00:08:52.797 11141.120 - 11200.698: 91.6545% ( 58) 00:08:52.797 11200.698 - 11260.276: 92.1072% ( 62) 00:08:52.797 11260.276 - 11319.855: 92.5599% ( 62) 00:08:52.797 11319.855 - 11379.433: 92.9761% ( 57) 00:08:52.797 11379.433 - 11439.011: 93.3703% ( 54) 00:08:52.797 11439.011 - 11498.589: 93.7208% ( 48) 00:08:52.797 11498.589 - 11558.167: 93.9909% ( 37) 00:08:52.797 11558.167 - 11617.745: 94.2538% ( 36) 00:08:52.797 11617.745 - 11677.324: 94.5532% ( 41) 00:08:52.797 11677.324 - 11736.902: 94.8452% ( 40) 00:08:52.797 11736.902 - 11796.480: 95.1154% ( 37) 00:08:52.797 11796.480 - 11856.058: 95.3125% ( 27) 00:08:52.797 11856.058 - 11915.636: 95.5534% ( 33) 00:08:52.797 11915.636 - 11975.215: 95.7506% ( 27) 00:08:52.797 11975.215 - 12034.793: 95.9769% ( 31) 00:08:52.797 12034.793 - 12094.371: 96.1814% ( 28) 00:08:52.797 12094.371 - 12153.949: 96.4004% ( 30) 00:08:52.797 12153.949 - 12213.527: 96.6121% ( 29) 00:08:52.797 12213.527 - 12273.105: 96.8458% ( 32) 00:08:52.797 12273.105 - 12332.684: 97.0356% ( 26) 00:08:52.797 12332.684 - 12392.262: 97.2401% ( 28) 00:08:52.797 12392.262 - 12451.840: 97.4372% ( 27) 00:08:52.797 12451.840 - 12511.418: 97.6124% ( 24) 00:08:52.797 12511.418 - 12570.996: 97.7877% ( 24) 00:08:52.797 12570.996 - 12630.575: 97.9118% ( 17) 00:08:52.797 12630.575 - 12690.153: 97.9994% ( 12) 00:08:52.797 12690.153 - 12749.731: 98.0578% ( 8) 00:08:52.797 12749.731 - 12809.309: 98.1016% ( 6) 00:08:52.797 12809.309 
- 12868.887: 98.1235% ( 3) 00:08:52.797 12868.887 - 12928.465: 98.1308% ( 1) 00:08:52.797 13762.560 - 13822.138: 98.1454% ( 2) 00:08:52.797 13822.138 - 13881.716: 98.1746% ( 4) 00:08:52.797 13881.716 - 13941.295: 98.2039% ( 4) 00:08:52.797 13941.295 - 14000.873: 98.2331% ( 4) 00:08:52.797 14000.873 - 14060.451: 98.2769% ( 6) 00:08:52.797 14060.451 - 14120.029: 98.3280% ( 7) 00:08:52.797 14120.029 - 14179.607: 98.3718% ( 6) 00:08:52.797 14179.607 - 14239.185: 98.4229% ( 7) 00:08:52.797 14239.185 - 14298.764: 98.4594% ( 5) 00:08:52.797 14298.764 - 14358.342: 98.5178% ( 8) 00:08:52.797 14358.342 - 14417.920: 98.5616% ( 6) 00:08:52.797 14417.920 - 14477.498: 98.6127% ( 7) 00:08:52.797 14477.498 - 14537.076: 98.6492% ( 5) 00:08:52.797 14537.076 - 14596.655: 98.6930% ( 6) 00:08:52.797 14596.655 - 14656.233: 98.7296% ( 5) 00:08:52.797 14656.233 - 14715.811: 98.7807% ( 7) 00:08:52.797 14715.811 - 14775.389: 98.8245% ( 6) 00:08:52.797 14775.389 - 14834.967: 98.8683% ( 6) 00:08:52.797 14834.967 - 14894.545: 98.8902% ( 3) 00:08:52.797 14894.545 - 14954.124: 98.9048% ( 2) 00:08:52.797 14954.124 - 15013.702: 98.9194% ( 2) 00:08:52.797 15013.702 - 15073.280: 98.9340% ( 2) 00:08:52.797 15073.280 - 15132.858: 98.9559% ( 3) 00:08:52.797 15132.858 - 15192.436: 98.9778% ( 3) 00:08:52.797 15192.436 - 15252.015: 98.9997% ( 3) 00:08:52.797 15252.015 - 15371.171: 99.0435% ( 6) 00:08:52.797 15371.171 - 15490.327: 99.0654% ( 3) 00:08:52.797 21686.458 - 21805.615: 99.0873% ( 3) 00:08:52.797 21805.615 - 21924.771: 99.1238% ( 5) 00:08:52.797 21924.771 - 22043.927: 99.1457% ( 3) 00:08:52.797 22043.927 - 22163.084: 99.1749% ( 4) 00:08:52.797 22163.084 - 22282.240: 99.1968% ( 3) 00:08:52.797 22282.240 - 22401.396: 99.2261% ( 4) 00:08:52.797 22401.396 - 22520.553: 99.2480% ( 3) 00:08:52.797 22520.553 - 22639.709: 99.2699% ( 3) 00:08:52.797 22639.709 - 22758.865: 99.2991% ( 4) 00:08:52.797 22758.865 - 22878.022: 99.3283% ( 4) 00:08:52.797 22878.022 - 22997.178: 99.3575% ( 4) 00:08:52.797 22997.178 - 23116.335: 99.3867% ( 4) 00:08:52.797 23116.335 - 23235.491: 99.4159% ( 4) 00:08:52.797 23235.491 - 23354.647: 99.4378% ( 3) 00:08:52.797 23354.647 - 23473.804: 99.4670% ( 4) 00:08:52.797 23473.804 - 23592.960: 99.4962% ( 4) 00:08:52.797 23592.960 - 23712.116: 99.5254% ( 4) 00:08:52.797 23712.116 - 23831.273: 99.5327% ( 1) 00:08:52.797 27882.589 - 28001.745: 99.5546% ( 3) 00:08:52.797 28001.745 - 28120.902: 99.5765% ( 3) 00:08:52.797 28120.902 - 28240.058: 99.6057% ( 4) 00:08:52.797 28240.058 - 28359.215: 99.6276% ( 3) 00:08:52.797 28359.215 - 28478.371: 99.6495% ( 3) 00:08:52.797 28478.371 - 28597.527: 99.6787% ( 4) 00:08:52.797 28597.527 - 28716.684: 99.7006% ( 3) 00:08:52.797 28716.684 - 28835.840: 99.7298% ( 4) 00:08:52.797 28835.840 - 28954.996: 99.7518% ( 3) 00:08:52.797 28954.996 - 29074.153: 99.7810% ( 4) 00:08:52.797 29074.153 - 29193.309: 99.8102% ( 4) 00:08:52.797 29193.309 - 29312.465: 99.8394% ( 4) 00:08:52.797 29312.465 - 29431.622: 99.8613% ( 3) 00:08:52.797 29431.622 - 29550.778: 99.8905% ( 4) 00:08:52.797 29550.778 - 29669.935: 99.9197% ( 4) 00:08:52.797 29669.935 - 29789.091: 99.9489% ( 4) 00:08:52.797 29789.091 - 29908.247: 99.9781% ( 4) 00:08:52.797 29908.247 - 30027.404: 100.0000% ( 3) 00:08:52.797 00:08:52.797 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:52.797 ============================================================================== 00:08:52.797 Range in us Cumulative IO count 00:08:52.797 7506.851 - 7536.640: 0.0073% ( 1) 00:08:52.797 7536.640 - 7566.429: 0.0438% ( 5) 
00:08:52.797 7566.429 - 7596.218: 0.1095% ( 9) 00:08:52.797 7596.218 - 7626.007: 0.2044% ( 13) 00:08:52.797 7626.007 - 7685.585: 0.5476% ( 47) 00:08:52.797 7685.585 - 7745.164: 1.1463% ( 82) 00:08:52.797 7745.164 - 7804.742: 2.0736% ( 127) 00:08:52.797 7804.742 - 7864.320: 3.4828% ( 193) 00:08:52.797 7864.320 - 7923.898: 5.1402% ( 227) 00:08:52.797 7923.898 - 7983.476: 7.1481% ( 275) 00:08:52.797 7983.476 - 8043.055: 9.3458% ( 301) 00:08:52.797 8043.055 - 8102.633: 11.8283% ( 340) 00:08:52.797 8102.633 - 8162.211: 14.4422% ( 358) 00:08:52.797 8162.211 - 8221.789: 17.2459% ( 384) 00:08:52.797 8221.789 - 8281.367: 20.2687% ( 414) 00:08:52.797 8281.367 - 8340.945: 23.3718% ( 425) 00:08:52.797 8340.945 - 8400.524: 26.4968% ( 428) 00:08:52.797 8400.524 - 8460.102: 29.7897% ( 451) 00:08:52.797 8460.102 - 8519.680: 33.1849% ( 465) 00:08:52.797 8519.680 - 8579.258: 36.8064% ( 496) 00:08:52.797 8579.258 - 8638.836: 40.3183% ( 481) 00:08:52.797 8638.836 - 8698.415: 43.8011% ( 477) 00:08:52.797 8698.415 - 8757.993: 46.9845% ( 436) 00:08:52.797 8757.993 - 8817.571: 49.9562% ( 407) 00:08:52.797 8817.571 - 8877.149: 52.5482% ( 355) 00:08:52.797 8877.149 - 8936.727: 54.9869% ( 334) 00:08:52.797 8936.727 - 8996.305: 57.2065% ( 304) 00:08:52.797 8996.305 - 9055.884: 59.1852% ( 271) 00:08:52.797 9055.884 - 9115.462: 60.8864% ( 233) 00:08:52.797 9115.462 - 9175.040: 62.5730% ( 231) 00:08:52.797 9175.040 - 9234.618: 64.2596% ( 231) 00:08:52.797 9234.618 - 9294.196: 65.8002% ( 211) 00:08:52.797 9294.196 - 9353.775: 67.2459% ( 198) 00:08:52.797 9353.775 - 9413.353: 68.5821% ( 183) 00:08:52.797 9413.353 - 9472.931: 69.8890% ( 179) 00:08:52.797 9472.931 - 9532.509: 71.0864% ( 164) 00:08:52.797 9532.509 - 9592.087: 72.1744% ( 149) 00:08:52.797 9592.087 - 9651.665: 73.2550% ( 148) 00:08:52.797 9651.665 - 9711.244: 74.2334% ( 134) 00:08:52.797 9711.244 - 9770.822: 75.2409% ( 138) 00:08:52.797 9770.822 - 9830.400: 76.1828% ( 129) 00:08:52.797 9830.400 - 9889.978: 77.0809% ( 123) 00:08:52.797 9889.978 - 9949.556: 77.9644% ( 121) 00:08:52.797 9949.556 - 10009.135: 78.7967% ( 114) 00:08:52.797 10009.135 - 10068.713: 79.6802% ( 121) 00:08:52.797 10068.713 - 10128.291: 80.4834% ( 110) 00:08:52.797 10128.291 - 10187.869: 81.1989% ( 98) 00:08:52.797 10187.869 - 10247.447: 81.9655% ( 105) 00:08:52.797 10247.447 - 10307.025: 82.6811% ( 98) 00:08:52.797 10307.025 - 10366.604: 83.3820% ( 96) 00:08:52.797 10366.604 - 10426.182: 84.0610% ( 93) 00:08:52.797 10426.182 - 10485.760: 84.8350% ( 106) 00:08:52.797 10485.760 - 10545.338: 85.5943% ( 104) 00:08:52.797 10545.338 - 10604.916: 86.3245% ( 100) 00:08:52.797 10604.916 - 10664.495: 87.0765% ( 103) 00:08:52.797 10664.495 - 10724.073: 87.8067% ( 100) 00:08:52.798 10724.073 - 10783.651: 88.4784% ( 92) 00:08:52.798 10783.651 - 10843.229: 89.1063% ( 86) 00:08:52.798 10843.229 - 10902.807: 89.7196% ( 84) 00:08:52.798 10902.807 - 10962.385: 90.2526% ( 73) 00:08:52.798 10962.385 - 11021.964: 90.6834% ( 59) 00:08:52.798 11021.964 - 11081.542: 91.1799% ( 68) 00:08:52.798 11081.542 - 11141.120: 91.5961% ( 57) 00:08:52.798 11141.120 - 11200.698: 91.9904% ( 54) 00:08:52.798 11200.698 - 11260.276: 92.3700% ( 52) 00:08:52.798 11260.276 - 11319.855: 92.7497% ( 52) 00:08:52.798 11319.855 - 11379.433: 93.1367% ( 53) 00:08:52.798 11379.433 - 11439.011: 93.5164% ( 52) 00:08:52.798 11439.011 - 11498.589: 93.8668% ( 48) 00:08:52.798 11498.589 - 11558.167: 94.2465% ( 52) 00:08:52.798 11558.167 - 11617.745: 94.5532% ( 42) 00:08:52.798 11617.745 - 11677.324: 94.8233% ( 37) 00:08:52.798 11677.324 - 
11736.902: 95.0716% ( 34) 00:08:52.798 11736.902 - 11796.480: 95.3344% ( 36) 00:08:52.798 11796.480 - 11856.058: 95.6119% ( 38) 00:08:52.798 11856.058 - 11915.636: 95.8455% ( 32) 00:08:52.798 11915.636 - 11975.215: 96.1084% ( 36) 00:08:52.798 11975.215 - 12034.793: 96.3347% ( 31) 00:08:52.798 12034.793 - 12094.371: 96.5756% ( 33) 00:08:52.798 12094.371 - 12153.949: 96.7801% ( 28) 00:08:52.798 12153.949 - 12213.527: 97.0064% ( 31) 00:08:52.798 12213.527 - 12273.105: 97.2109% ( 28) 00:08:52.798 12273.105 - 12332.684: 97.4080% ( 27) 00:08:52.798 12332.684 - 12392.262: 97.5613% ( 21) 00:08:52.798 12392.262 - 12451.840: 97.7147% ( 21) 00:08:52.798 12451.840 - 12511.418: 97.8534% ( 19) 00:08:52.798 12511.418 - 12570.996: 97.9483% ( 13) 00:08:52.798 12570.996 - 12630.575: 97.9994% ( 7) 00:08:52.798 12630.575 - 12690.153: 98.0286% ( 4) 00:08:52.798 12690.153 - 12749.731: 98.0505% ( 3) 00:08:52.798 12749.731 - 12809.309: 98.0651% ( 2) 00:08:52.798 12809.309 - 12868.887: 98.0870% ( 3) 00:08:52.798 12868.887 - 12928.465: 98.1089% ( 3) 00:08:52.798 12928.465 - 12988.044: 98.1308% ( 3) 00:08:52.798 13583.825 - 13643.404: 98.1381% ( 1) 00:08:52.798 13643.404 - 13702.982: 98.1527% ( 2) 00:08:52.798 13702.982 - 13762.560: 98.1746% ( 3) 00:08:52.798 13762.560 - 13822.138: 98.2039% ( 4) 00:08:52.798 13822.138 - 13881.716: 98.2185% ( 2) 00:08:52.798 13881.716 - 13941.295: 98.2331% ( 2) 00:08:52.798 13941.295 - 14000.873: 98.2769% ( 6) 00:08:52.798 14000.873 - 14060.451: 98.3207% ( 6) 00:08:52.798 14060.451 - 14120.029: 98.3645% ( 6) 00:08:52.798 14120.029 - 14179.607: 98.4083% ( 6) 00:08:52.798 14179.607 - 14239.185: 98.4448% ( 5) 00:08:52.798 14239.185 - 14298.764: 98.4886% ( 6) 00:08:52.798 14298.764 - 14358.342: 98.5324% ( 6) 00:08:52.798 14358.342 - 14417.920: 98.5689% ( 5) 00:08:52.798 14417.920 - 14477.498: 98.6054% ( 5) 00:08:52.798 14477.498 - 14537.076: 98.6419% ( 5) 00:08:52.798 14537.076 - 14596.655: 98.6857% ( 6) 00:08:52.798 14596.655 - 14656.233: 98.7223% ( 5) 00:08:52.798 14656.233 - 14715.811: 98.7588% ( 5) 00:08:52.798 14715.811 - 14775.389: 98.8026% ( 6) 00:08:52.798 14775.389 - 14834.967: 98.8464% ( 6) 00:08:52.798 14834.967 - 14894.545: 98.8902% ( 6) 00:08:52.798 14894.545 - 14954.124: 98.9267% ( 5) 00:08:52.798 14954.124 - 15013.702: 98.9559% ( 4) 00:08:52.798 15013.702 - 15073.280: 98.9705% ( 2) 00:08:52.798 15073.280 - 15132.858: 98.9924% ( 3) 00:08:52.798 15132.858 - 15192.436: 99.0143% ( 3) 00:08:52.798 15192.436 - 15252.015: 99.0362% ( 3) 00:08:52.798 15252.015 - 15371.171: 99.0654% ( 4) 00:08:52.798 19541.644 - 19660.800: 99.0800% ( 2) 00:08:52.798 19660.800 - 19779.956: 99.1019% ( 3) 00:08:52.798 19779.956 - 19899.113: 99.1311% ( 4) 00:08:52.798 19899.113 - 20018.269: 99.1530% ( 3) 00:08:52.798 20018.269 - 20137.425: 99.1749% ( 3) 00:08:52.798 20137.425 - 20256.582: 99.2041% ( 4) 00:08:52.798 20256.582 - 20375.738: 99.2334% ( 4) 00:08:52.798 20375.738 - 20494.895: 99.2553% ( 3) 00:08:52.798 20494.895 - 20614.051: 99.2845% ( 4) 00:08:52.798 20614.051 - 20733.207: 99.3137% ( 4) 00:08:52.798 20733.207 - 20852.364: 99.3429% ( 4) 00:08:52.798 20852.364 - 20971.520: 99.3721% ( 4) 00:08:52.798 20971.520 - 21090.676: 99.3940% ( 3) 00:08:52.798 21090.676 - 21209.833: 99.4232% ( 4) 00:08:52.798 21209.833 - 21328.989: 99.4524% ( 4) 00:08:52.798 21328.989 - 21448.145: 99.4743% ( 3) 00:08:52.798 21448.145 - 21567.302: 99.5035% ( 4) 00:08:52.798 21567.302 - 21686.458: 99.5327% ( 4) 00:08:52.798 25737.775 - 25856.931: 99.5473% ( 2) 00:08:52.798 25856.931 - 25976.087: 99.5692% ( 3) 00:08:52.798 
25976.087 - 26095.244: 99.5984% ( 4) 00:08:52.798 26095.244 - 26214.400: 99.6276% ( 4) 00:08:52.798 26214.400 - 26333.556: 99.6495% ( 3) 00:08:52.798 26333.556 - 26452.713: 99.6787% ( 4) 00:08:52.798 26452.713 - 26571.869: 99.7079% ( 4) 00:08:52.798 26571.869 - 26691.025: 99.7298% ( 3) 00:08:52.798 26691.025 - 26810.182: 99.7591% ( 4) 00:08:52.798 26810.182 - 26929.338: 99.7810% ( 3) 00:08:52.798 26929.338 - 27048.495: 99.8029% ( 3) 00:08:52.798 27048.495 - 27167.651: 99.8321% ( 4) 00:08:52.798 27167.651 - 27286.807: 99.8613% ( 4) 00:08:52.798 27286.807 - 27405.964: 99.8832% ( 3) 00:08:52.798 27405.964 - 27525.120: 99.9124% ( 4) 00:08:52.798 27525.120 - 27644.276: 99.9416% ( 4) 00:08:52.798 27644.276 - 27763.433: 99.9708% ( 4) 00:08:52.798 27763.433 - 27882.589: 100.0000% ( 4) 00:08:52.798 00:08:52.798 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:52.798 ============================================================================== 00:08:52.798 Range in us Cumulative IO count 00:08:52.798 7536.640 - 7566.429: 0.0292% ( 4) 00:08:52.798 7566.429 - 7596.218: 0.0730% ( 6) 00:08:52.798 7596.218 - 7626.007: 0.1387% ( 9) 00:08:52.798 7626.007 - 7685.585: 0.4381% ( 41) 00:08:52.798 7685.585 - 7745.164: 0.9711% ( 73) 00:08:52.798 7745.164 - 7804.742: 1.8911% ( 126) 00:08:52.798 7804.742 - 7864.320: 3.1834% ( 177) 00:08:52.798 7864.320 - 7923.898: 4.9211% ( 238) 00:08:52.798 7923.898 - 7983.476: 6.8998% ( 271) 00:08:52.798 7983.476 - 8043.055: 9.2655% ( 324) 00:08:52.798 8043.055 - 8102.633: 11.6676% ( 329) 00:08:52.798 8102.633 - 8162.211: 14.3911% ( 373) 00:08:52.798 8162.211 - 8221.789: 17.2167% ( 387) 00:08:52.798 8221.789 - 8281.367: 20.1884% ( 407) 00:08:52.798 8281.367 - 8340.945: 23.2477% ( 419) 00:08:52.798 8340.945 - 8400.524: 26.3946% ( 431) 00:08:52.798 8400.524 - 8460.102: 29.6729% ( 449) 00:08:52.798 8460.102 - 8519.680: 33.1046% ( 470) 00:08:52.798 8519.680 - 8579.258: 36.6822% ( 490) 00:08:52.798 8579.258 - 8638.836: 40.2161% ( 484) 00:08:52.798 8638.836 - 8698.415: 43.7062% ( 478) 00:08:52.798 8698.415 - 8757.993: 46.8896% ( 436) 00:08:52.798 8757.993 - 8817.571: 49.8102% ( 400) 00:08:52.798 8817.571 - 8877.149: 52.4825% ( 366) 00:08:52.798 8877.149 - 8936.727: 54.9284% ( 335) 00:08:52.798 8936.727 - 8996.305: 57.1189% ( 300) 00:08:52.798 8996.305 - 9055.884: 58.9515% ( 251) 00:08:52.798 9055.884 - 9115.462: 60.7769% ( 250) 00:08:52.798 9115.462 - 9175.040: 62.5000% ( 236) 00:08:52.798 9175.040 - 9234.618: 64.0552% ( 213) 00:08:52.798 9234.618 - 9294.196: 65.6177% ( 214) 00:08:52.798 9294.196 - 9353.775: 66.9977% ( 189) 00:08:52.798 9353.775 - 9413.353: 68.3484% ( 185) 00:08:52.798 9413.353 - 9472.931: 69.7284% ( 189) 00:08:52.798 9472.931 - 9532.509: 70.9331% ( 165) 00:08:52.798 9532.509 - 9592.087: 72.1305% ( 164) 00:08:52.798 9592.087 - 9651.665: 73.2039% ( 147) 00:08:52.798 9651.665 - 9711.244: 74.2480% ( 143) 00:08:52.798 9711.244 - 9770.822: 75.2848% ( 142) 00:08:52.798 9770.822 - 9830.400: 76.3435% ( 145) 00:08:52.798 9830.400 - 9889.978: 77.3145% ( 133) 00:08:52.798 9889.978 - 9949.556: 78.3367% ( 140) 00:08:52.798 9949.556 - 10009.135: 79.2640% ( 127) 00:08:52.798 10009.135 - 10068.713: 80.0964% ( 114) 00:08:52.798 10068.713 - 10128.291: 80.8922% ( 109) 00:08:52.798 10128.291 - 10187.869: 81.7100% ( 112) 00:08:52.798 10187.869 - 10247.447: 82.4912% ( 107) 00:08:52.798 10247.447 - 10307.025: 83.2433% ( 103) 00:08:52.798 10307.025 - 10366.604: 83.9807% ( 101) 00:08:52.798 10366.604 - 10426.182: 84.6379% ( 90) 00:08:52.798 10426.182 - 10485.760: 85.3315% 
( 95) 00:08:52.798 10485.760 - 10545.338: 86.0032% ( 92) 00:08:52.798 10545.338 - 10604.916: 86.6019% ( 82) 00:08:52.798 10604.916 - 10664.495: 87.1714% ( 78) 00:08:52.798 10664.495 - 10724.073: 87.7190% ( 75) 00:08:52.798 10724.073 - 10783.651: 88.1936% ( 65) 00:08:52.798 10783.651 - 10843.229: 88.6244% ( 59) 00:08:52.798 10843.229 - 10902.807: 89.0990% ( 65) 00:08:52.798 10902.807 - 10962.385: 89.6028% ( 69) 00:08:52.798 10962.385 - 11021.964: 90.1066% ( 69) 00:08:52.798 11021.964 - 11081.542: 90.5958% ( 67) 00:08:52.798 11081.542 - 11141.120: 91.0850% ( 67) 00:08:52.798 11141.120 - 11200.698: 91.5158% ( 59) 00:08:52.798 11200.698 - 11260.276: 91.9539% ( 60) 00:08:52.798 11260.276 - 11319.855: 92.3554% ( 55) 00:08:52.798 11319.855 - 11379.433: 92.8154% ( 63) 00:08:52.798 11379.433 - 11439.011: 93.2243% ( 56) 00:08:52.798 11439.011 - 11498.589: 93.6186% ( 54) 00:08:52.798 11498.589 - 11558.167: 93.9982% ( 52) 00:08:52.798 11558.167 - 11617.745: 94.3925% ( 54) 00:08:52.798 11617.745 - 11677.324: 94.7284% ( 46) 00:08:52.798 11677.324 - 11736.902: 95.0350% ( 42) 00:08:52.798 11736.902 - 11796.480: 95.3563% ( 44) 00:08:52.798 11796.480 - 11856.058: 95.6338% ( 38) 00:08:52.798 11856.058 - 11915.636: 95.9331% ( 41) 00:08:52.799 11915.636 - 11975.215: 96.1887% ( 35) 00:08:52.799 11975.215 - 12034.793: 96.4442% ( 35) 00:08:52.799 12034.793 - 12094.371: 96.6706% ( 31) 00:08:52.799 12094.371 - 12153.949: 96.9115% ( 33) 00:08:52.799 12153.949 - 12213.527: 97.1305% ( 30) 00:08:52.799 12213.527 - 12273.105: 97.3350% ( 28) 00:08:52.799 12273.105 - 12332.684: 97.5248% ( 26) 00:08:52.799 12332.684 - 12392.262: 97.6636% ( 19) 00:08:52.799 12392.262 - 12451.840: 97.7658% ( 14) 00:08:52.799 12451.840 - 12511.418: 97.8753% ( 15) 00:08:52.799 12511.418 - 12570.996: 97.9775% ( 14) 00:08:52.799 12570.996 - 12630.575: 98.0432% ( 9) 00:08:52.799 12630.575 - 12690.153: 98.0870% ( 6) 00:08:52.799 12690.153 - 12749.731: 98.1235% ( 5) 00:08:52.799 12749.731 - 12809.309: 98.1308% ( 1) 00:08:52.799 13702.982 - 13762.560: 98.1381% ( 1) 00:08:52.799 13762.560 - 13822.138: 98.1746% ( 5) 00:08:52.799 13822.138 - 13881.716: 98.2185% ( 6) 00:08:52.799 13881.716 - 13941.295: 98.2623% ( 6) 00:08:52.799 13941.295 - 14000.873: 98.2988% ( 5) 00:08:52.799 14000.873 - 14060.451: 98.3353% ( 5) 00:08:52.799 14060.451 - 14120.029: 98.3864% ( 7) 00:08:52.799 14120.029 - 14179.607: 98.4302% ( 6) 00:08:52.799 14179.607 - 14239.185: 98.4740% ( 6) 00:08:52.799 14239.185 - 14298.764: 98.5105% ( 5) 00:08:52.799 14298.764 - 14358.342: 98.5397% ( 4) 00:08:52.799 14358.342 - 14417.920: 98.5689% ( 4) 00:08:52.799 14417.920 - 14477.498: 98.6127% ( 6) 00:08:52.799 14477.498 - 14537.076: 98.6492% ( 5) 00:08:52.799 14537.076 - 14596.655: 98.6930% ( 6) 00:08:52.799 14596.655 - 14656.233: 98.7369% ( 6) 00:08:52.799 14656.233 - 14715.811: 98.7734% ( 5) 00:08:52.799 14715.811 - 14775.389: 98.8172% ( 6) 00:08:52.799 14775.389 - 14834.967: 98.8610% ( 6) 00:08:52.799 14834.967 - 14894.545: 98.8975% ( 5) 00:08:52.799 14894.545 - 14954.124: 98.9413% ( 6) 00:08:52.799 14954.124 - 15013.702: 98.9851% ( 6) 00:08:52.799 15013.702 - 15073.280: 99.0216% ( 5) 00:08:52.799 15073.280 - 15132.858: 99.0435% ( 3) 00:08:52.799 15132.858 - 15192.436: 99.0654% ( 3) 00:08:52.799 17396.829 - 17515.985: 99.0873% ( 3) 00:08:52.799 17515.985 - 17635.142: 99.1165% ( 4) 00:08:52.799 17635.142 - 17754.298: 99.1457% ( 4) 00:08:52.799 17754.298 - 17873.455: 99.1676% ( 3) 00:08:52.799 17873.455 - 17992.611: 99.1895% ( 3) 00:08:52.799 17992.611 - 18111.767: 99.2188% ( 4) 00:08:52.799 
18111.767 - 18230.924: 99.2480% ( 4) 00:08:52.799 18230.924 - 18350.080: 99.2772% ( 4) 00:08:52.799 18350.080 - 18469.236: 99.2991% ( 3) 00:08:52.799 18469.236 - 18588.393: 99.3283% ( 4) 00:08:52.799 18588.393 - 18707.549: 99.3575% ( 4) 00:08:52.799 18707.549 - 18826.705: 99.3867% ( 4) 00:08:52.799 18826.705 - 18945.862: 99.4086% ( 3) 00:08:52.799 18945.862 - 19065.018: 99.4305% ( 3) 00:08:52.799 19065.018 - 19184.175: 99.4597% ( 4) 00:08:52.799 19184.175 - 19303.331: 99.4889% ( 4) 00:08:52.799 19303.331 - 19422.487: 99.5108% ( 3) 00:08:52.799 19422.487 - 19541.644: 99.5327% ( 3) 00:08:52.799 23592.960 - 23712.116: 99.5546% ( 3) 00:08:52.799 23712.116 - 23831.273: 99.5765% ( 3) 00:08:52.799 23831.273 - 23950.429: 99.6057% ( 4) 00:08:52.799 23950.429 - 24069.585: 99.6276% ( 3) 00:08:52.799 24069.585 - 24188.742: 99.6495% ( 3) 00:08:52.799 24188.742 - 24307.898: 99.6787% ( 4) 00:08:52.799 24307.898 - 24427.055: 99.7079% ( 4) 00:08:52.799 24427.055 - 24546.211: 99.7371% ( 4) 00:08:52.799 24546.211 - 24665.367: 99.7518% ( 2) 00:08:52.799 24665.367 - 24784.524: 99.7810% ( 4) 00:08:52.799 24784.524 - 24903.680: 99.7956% ( 2) 00:08:52.799 24903.680 - 25022.836: 99.8175% ( 3) 00:08:52.799 25022.836 - 25141.993: 99.8467% ( 4) 00:08:52.799 25141.993 - 25261.149: 99.8759% ( 4) 00:08:52.799 25261.149 - 25380.305: 99.9051% ( 4) 00:08:52.799 25380.305 - 25499.462: 99.9343% ( 4) 00:08:52.799 25499.462 - 25618.618: 99.9635% ( 4) 00:08:52.799 25618.618 - 25737.775: 99.9781% ( 2) 00:08:52.799 25737.775 - 25856.931: 100.0000% ( 3) 00:08:52.799 00:08:52.799 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:52.799 ============================================================================== 00:08:52.799 Range in us Cumulative IO count 00:08:52.799 7536.640 - 7566.429: 0.0219% ( 3) 00:08:52.799 7566.429 - 7596.218: 0.0584% ( 5) 00:08:52.799 7596.218 - 7626.007: 0.1460% ( 12) 00:08:52.799 7626.007 - 7685.585: 0.4016% ( 35) 00:08:52.799 7685.585 - 7745.164: 1.0368% ( 87) 00:08:52.799 7745.164 - 7804.742: 2.0079% ( 133) 00:08:52.799 7804.742 - 7864.320: 3.3148% ( 179) 00:08:52.799 7864.320 - 7923.898: 4.8846% ( 215) 00:08:52.799 7923.898 - 7983.476: 6.8925% ( 275) 00:08:52.799 7983.476 - 8043.055: 9.1852% ( 314) 00:08:52.799 8043.055 - 8102.633: 11.6165% ( 333) 00:08:52.799 8102.633 - 8162.211: 14.2377% ( 359) 00:08:52.799 8162.211 - 8221.789: 17.0269% ( 382) 00:08:52.799 8221.789 - 8281.367: 20.1300% ( 425) 00:08:52.799 8281.367 - 8340.945: 23.3207% ( 437) 00:08:52.799 8340.945 - 8400.524: 26.5333% ( 440) 00:08:52.799 8400.524 - 8460.102: 29.8408% ( 453) 00:08:52.799 8460.102 - 8519.680: 33.4039% ( 488) 00:08:52.799 8519.680 - 8579.258: 36.9232% ( 482) 00:08:52.799 8579.258 - 8638.836: 40.3987% ( 476) 00:08:52.799 8638.836 - 8698.415: 43.8084% ( 467) 00:08:52.799 8698.415 - 8757.993: 47.0575% ( 445) 00:08:52.799 8757.993 - 8817.571: 50.0219% ( 406) 00:08:52.799 8817.571 - 8877.149: 52.6066% ( 354) 00:08:52.799 8877.149 - 8936.727: 54.9723% ( 324) 00:08:52.799 8936.727 - 8996.305: 57.1262% ( 295) 00:08:52.799 8996.305 - 9055.884: 59.0464% ( 263) 00:08:52.799 9055.884 - 9115.462: 60.9813% ( 265) 00:08:52.799 9115.462 - 9175.040: 62.7555% ( 243) 00:08:52.799 9175.040 - 9234.618: 64.4276% ( 229) 00:08:52.799 9234.618 - 9294.196: 66.0266% ( 219) 00:08:52.799 9294.196 - 9353.775: 67.4577% ( 196) 00:08:52.799 9353.775 - 9413.353: 68.8230% ( 187) 00:08:52.799 9413.353 - 9472.931: 70.1738% ( 185) 00:08:52.799 9472.931 - 9532.509: 71.4004% ( 168) 00:08:52.799 9532.509 - 9592.087: 72.5175% ( 153) 
00:08:52.799 9592.087 - 9651.665: 73.5689% ( 144) 00:08:52.799 9651.665 - 9711.244: 74.5984% ( 141) 00:08:52.799 9711.244 - 9770.822: 75.5695% ( 133) 00:08:52.799 9770.822 - 9830.400: 76.5406% ( 133) 00:08:52.799 9830.400 - 9889.978: 77.4752% ( 128) 00:08:52.799 9889.978 - 9949.556: 78.4317% ( 131) 00:08:52.799 9949.556 - 10009.135: 79.3443% ( 125) 00:08:52.799 10009.135 - 10068.713: 80.1986% ( 117) 00:08:52.799 10068.713 - 10128.291: 80.9579% ( 104) 00:08:52.799 10128.291 - 10187.869: 81.7830% ( 113) 00:08:52.799 10187.869 - 10247.447: 82.5716% ( 108) 00:08:52.799 10247.447 - 10307.025: 83.3820% ( 111) 00:08:52.799 10307.025 - 10366.604: 84.0829% ( 96) 00:08:52.799 10366.604 - 10426.182: 84.7036% ( 85) 00:08:52.799 10426.182 - 10485.760: 85.2804% ( 79) 00:08:52.799 10485.760 - 10545.338: 85.8499% ( 78) 00:08:52.799 10545.338 - 10604.916: 86.3756% ( 72) 00:08:52.799 10604.916 - 10664.495: 86.8356% ( 63) 00:08:52.799 10664.495 - 10724.073: 87.3467% ( 70) 00:08:52.799 10724.073 - 10783.651: 87.8578% ( 70) 00:08:52.799 10783.651 - 10843.229: 88.3397% ( 66) 00:08:52.799 10843.229 - 10902.807: 88.8289% ( 67) 00:08:52.799 10902.807 - 10962.385: 89.3034% ( 65) 00:08:52.799 10962.385 - 11021.964: 89.7488% ( 61) 00:08:52.799 11021.964 - 11081.542: 90.1869% ( 60) 00:08:52.799 11081.542 - 11141.120: 90.6104% ( 58) 00:08:52.799 11141.120 - 11200.698: 91.0777% ( 64) 00:08:52.799 11200.698 - 11260.276: 91.5596% ( 66) 00:08:52.799 11260.276 - 11319.855: 91.9831% ( 58) 00:08:52.799 11319.855 - 11379.433: 92.3627% ( 52) 00:08:52.799 11379.433 - 11439.011: 92.7497% ( 53) 00:08:52.799 11439.011 - 11498.589: 93.1732% ( 58) 00:08:52.799 11498.589 - 11558.167: 93.5602% ( 53) 00:08:52.800 11558.167 - 11617.745: 93.9763% ( 57) 00:08:52.800 11617.745 - 11677.324: 94.3341% ( 49) 00:08:52.800 11677.324 - 11736.902: 94.6919% ( 49) 00:08:52.800 11736.902 - 11796.480: 95.0350% ( 47) 00:08:52.800 11796.480 - 11856.058: 95.3855% ( 48) 00:08:52.800 11856.058 - 11915.636: 95.7141% ( 45) 00:08:52.800 11915.636 - 11975.215: 96.0353% ( 44) 00:08:52.800 11975.215 - 12034.793: 96.3128% ( 38) 00:08:52.800 12034.793 - 12094.371: 96.6048% ( 40) 00:08:52.800 12094.371 - 12153.949: 96.8750% ( 37) 00:08:52.800 12153.949 - 12213.527: 97.0867% ( 29) 00:08:52.800 12213.527 - 12273.105: 97.2693% ( 25) 00:08:52.800 12273.105 - 12332.684: 97.4226% ( 21) 00:08:52.800 12332.684 - 12392.262: 97.5540% ( 18) 00:08:52.800 12392.262 - 12451.840: 97.6709% ( 16) 00:08:52.800 12451.840 - 12511.418: 97.7731% ( 14) 00:08:52.800 12511.418 - 12570.996: 97.8753% ( 14) 00:08:52.800 12570.996 - 12630.575: 97.9848% ( 15) 00:08:52.800 12630.575 - 12690.153: 98.0797% ( 13) 00:08:52.800 12690.153 - 12749.731: 98.1308% ( 7) 00:08:52.800 13762.560 - 13822.138: 98.1381% ( 1) 00:08:52.800 13822.138 - 13881.716: 98.1746% ( 5) 00:08:52.800 13881.716 - 13941.295: 98.2185% ( 6) 00:08:52.800 13941.295 - 14000.873: 98.2696% ( 7) 00:08:52.800 14000.873 - 14060.451: 98.3280% ( 8) 00:08:52.800 14060.451 - 14120.029: 98.3791% ( 7) 00:08:52.800 14120.029 - 14179.607: 98.4156% ( 5) 00:08:52.800 14179.607 - 14239.185: 98.4521% ( 5) 00:08:52.800 14239.185 - 14298.764: 98.4959% ( 6) 00:08:52.800 14298.764 - 14358.342: 98.5397% ( 6) 00:08:52.800 14358.342 - 14417.920: 98.5835% ( 6) 00:08:52.800 14417.920 - 14477.498: 98.6200% ( 5) 00:08:52.800 14477.498 - 14537.076: 98.6638% ( 6) 00:08:52.800 14537.076 - 14596.655: 98.7077% ( 6) 00:08:52.800 14596.655 - 14656.233: 98.7515% ( 6) 00:08:52.800 14656.233 - 14715.811: 98.8026% ( 7) 00:08:52.800 14715.811 - 14775.389: 98.8464% ( 6) 
00:08:52.800 14775.389 - 14834.967: 98.8829% ( 5) 00:08:52.800 14834.967 - 14894.545: 98.9340% ( 7) 00:08:52.800 14894.545 - 14954.124: 98.9778% ( 6) 00:08:52.800 14954.124 - 15013.702: 99.0216% ( 6) 00:08:52.800 15013.702 - 15073.280: 99.0508% ( 4) 00:08:52.800 15073.280 - 15132.858: 99.0654% ( 2) 00:08:52.800 15252.015 - 15371.171: 99.0727% ( 1) 00:08:52.800 15371.171 - 15490.327: 99.0946% ( 3) 00:08:52.800 15490.327 - 15609.484: 99.1311% ( 5) 00:08:52.800 15609.484 - 15728.640: 99.1603% ( 4) 00:08:52.800 15728.640 - 15847.796: 99.1822% ( 3) 00:08:52.800 15847.796 - 15966.953: 99.2041% ( 3) 00:08:52.800 15966.953 - 16086.109: 99.2334% ( 4) 00:08:52.800 16086.109 - 16205.265: 99.2626% ( 4) 00:08:52.800 16205.265 - 16324.422: 99.2845% ( 3) 00:08:52.800 16324.422 - 16443.578: 99.3064% ( 3) 00:08:52.800 16443.578 - 16562.735: 99.3356% ( 4) 00:08:52.800 16562.735 - 16681.891: 99.3648% ( 4) 00:08:52.800 16681.891 - 16801.047: 99.3940% ( 4) 00:08:52.800 16801.047 - 16920.204: 99.4159% ( 3) 00:08:52.800 16920.204 - 17039.360: 99.4451% ( 4) 00:08:52.800 17039.360 - 17158.516: 99.4743% ( 4) 00:08:52.800 17158.516 - 17277.673: 99.5035% ( 4) 00:08:52.800 17277.673 - 17396.829: 99.5327% ( 4) 00:08:52.800 21567.302 - 21686.458: 99.5619% ( 4) 00:08:52.800 21686.458 - 21805.615: 99.5838% ( 3) 00:08:52.800 21805.615 - 21924.771: 99.6130% ( 4) 00:08:52.800 21924.771 - 22043.927: 99.6422% ( 4) 00:08:52.800 22043.927 - 22163.084: 99.6714% ( 4) 00:08:52.800 22163.084 - 22282.240: 99.6933% ( 3) 00:08:52.800 22282.240 - 22401.396: 99.7225% ( 4) 00:08:52.800 22401.396 - 22520.553: 99.7518% ( 4) 00:08:52.800 22520.553 - 22639.709: 99.7737% ( 3) 00:08:52.800 22639.709 - 22758.865: 99.8029% ( 4) 00:08:52.800 22758.865 - 22878.022: 99.8321% ( 4) 00:08:52.800 22878.022 - 22997.178: 99.8613% ( 4) 00:08:52.800 22997.178 - 23116.335: 99.8905% ( 4) 00:08:52.800 23116.335 - 23235.491: 99.9270% ( 5) 00:08:52.800 23235.491 - 23354.647: 99.9562% ( 4) 00:08:52.800 23354.647 - 23473.804: 99.9854% ( 4) 00:08:52.800 23473.804 - 23592.960: 100.0000% ( 2) 00:08:52.800 00:08:52.800 13:04:44 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:54.175 Initializing NVMe Controllers 00:08:54.175 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.175 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.175 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.175 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.175 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:54.175 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:54.175 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:54.175 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:54.175 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:54.175 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:54.175 Initialization complete. Launching workers. 
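(Editor's note, not part of the captured output: the run below prints a per-device "Device Information" summary — IOPS, MiB/s, and average/min/max latency in microseconds — followed by an aggregate "Total" row. As an illustrative sketch only, a one-line filter along the following lines could pull that aggregate row out of a saved console log; the file name is a placeholder, it is not part of the SPDK test scripts, and the field positions assume the table layout shown in this run.)
# Illustrative only: print the aggregate "Total" row(s) from a saved nvme_perf
# console log (one line per perf run captured; "perf-console.log" is a placeholder
# name). Fields are counted from the end of the line so a timestamp prefix,
# if present, does not shift them.
awk '/Total +: +[0-9]/ { n = NF; print "IOPS=" $(n-4), "MiB/s=" $(n-3), "avg_us=" $(n-2), "min_us=" $(n-1), "max_us=" $n }' perf-console.log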
00:08:54.175 ======================================================== 00:08:54.175 Latency(us) 00:08:54.175 Device Information : IOPS MiB/s Average min max 00:08:54.175 PCIE (0000:00:10.0) NSID 1 from core 0: 9638.10 112.95 13302.58 9000.01 45797.05 00:08:54.175 PCIE (0000:00:11.0) NSID 1 from core 0: 9638.10 112.95 13262.61 9221.12 42334.68 00:08:54.175 PCIE (0000:00:13.0) NSID 1 from core 0: 9638.10 112.95 13221.37 9511.71 39630.71 00:08:54.175 PCIE (0000:00:12.0) NSID 1 from core 0: 9638.10 112.95 13179.76 9387.35 36201.96 00:08:54.175 PCIE (0000:00:12.0) NSID 2 from core 0: 9638.10 112.95 13138.97 9159.36 32765.98 00:08:54.175 PCIE (0000:00:12.0) NSID 3 from core 0: 9638.10 112.95 13100.61 9240.07 29683.99 00:08:54.175 ======================================================== 00:08:54.175 Total : 57828.61 677.68 13200.98 9000.01 45797.05 00:08:54.175 00:08:54.175 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:54.175 ================================================================================= 00:08:54.175 1.00000% : 9770.822us 00:08:54.175 10.00000% : 10783.651us 00:08:54.175 25.00000% : 11677.324us 00:08:54.175 50.00000% : 12868.887us 00:08:54.175 75.00000% : 14120.029us 00:08:54.175 90.00000% : 15490.327us 00:08:54.175 95.00000% : 16562.735us 00:08:54.175 98.00000% : 18588.393us 00:08:54.175 99.00000% : 34793.658us 00:08:54.175 99.50000% : 43849.542us 00:08:54.175 99.90000% : 45517.731us 00:08:54.175 99.99000% : 45994.356us 00:08:54.175 99.99900% : 45994.356us 00:08:54.175 99.99990% : 45994.356us 00:08:54.175 99.99999% : 45994.356us 00:08:54.175 00:08:54.175 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:54.175 ================================================================================= 00:08:54.175 1.00000% : 9770.822us 00:08:54.175 10.00000% : 10724.073us 00:08:54.175 25.00000% : 11677.324us 00:08:54.175 50.00000% : 12988.044us 00:08:54.175 75.00000% : 14239.185us 00:08:54.175 90.00000% : 15490.327us 00:08:54.175 95.00000% : 16205.265us 00:08:54.175 98.00000% : 17039.360us 00:08:54.175 99.00000% : 32648.844us 00:08:54.175 99.50000% : 40513.164us 00:08:54.175 99.90000% : 42181.353us 00:08:54.175 99.99000% : 42419.665us 00:08:54.175 99.99900% : 42419.665us 00:08:54.175 99.99990% : 42419.665us 00:08:54.175 99.99999% : 42419.665us 00:08:54.175 00:08:54.175 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:54.175 ================================================================================= 00:08:54.175 1.00000% : 9830.400us 00:08:54.175 10.00000% : 10783.651us 00:08:54.175 25.00000% : 11677.324us 00:08:54.175 50.00000% : 12928.465us 00:08:54.175 75.00000% : 14179.607us 00:08:54.175 90.00000% : 15490.327us 00:08:54.175 95.00000% : 16324.422us 00:08:54.175 98.00000% : 17039.360us 00:08:54.175 99.00000% : 30384.873us 00:08:54.175 99.50000% : 37653.411us 00:08:54.175 99.90000% : 39321.600us 00:08:54.175 99.99000% : 39798.225us 00:08:54.175 99.99900% : 39798.225us 00:08:54.175 99.99990% : 39798.225us 00:08:54.175 99.99999% : 39798.225us 00:08:54.175 00:08:54.175 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:54.175 ================================================================================= 00:08:54.175 1.00000% : 9770.822us 00:08:54.175 10.00000% : 10724.073us 00:08:54.175 25.00000% : 11617.745us 00:08:54.175 50.00000% : 12928.465us 00:08:54.175 75.00000% : 14239.185us 00:08:54.175 90.00000% : 15490.327us 00:08:54.175 95.00000% : 16324.422us 00:08:54.175 98.00000% : 17158.516us 
00:08:54.175 99.00000% : 27167.651us 00:08:54.175 99.50000% : 34317.033us 00:08:54.175 99.90000% : 35985.222us 00:08:54.175 99.99000% : 36223.535us 00:08:54.175 99.99900% : 36223.535us 00:08:54.175 99.99990% : 36223.535us 00:08:54.175 99.99999% : 36223.535us 00:08:54.175 00:08:54.175 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:54.175 ================================================================================= 00:08:54.175 1.00000% : 9711.244us 00:08:54.175 10.00000% : 10783.651us 00:08:54.175 25.00000% : 11677.324us 00:08:54.175 50.00000% : 12928.465us 00:08:54.175 75.00000% : 14179.607us 00:08:54.175 90.00000% : 15490.327us 00:08:54.175 95.00000% : 16443.578us 00:08:54.175 98.00000% : 17158.516us 00:08:54.176 99.00000% : 23831.273us 00:08:54.176 99.50000% : 30980.655us 00:08:54.176 99.90000% : 32410.531us 00:08:54.176 99.99000% : 32887.156us 00:08:54.176 99.99900% : 32887.156us 00:08:54.176 99.99990% : 32887.156us 00:08:54.176 99.99999% : 32887.156us 00:08:54.176 00:08:54.176 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:54.176 ================================================================================= 00:08:54.176 1.00000% : 9651.665us 00:08:54.176 10.00000% : 10902.807us 00:08:54.176 25.00000% : 11677.324us 00:08:54.176 50.00000% : 12928.465us 00:08:54.176 75.00000% : 14120.029us 00:08:54.176 90.00000% : 15371.171us 00:08:54.176 95.00000% : 16443.578us 00:08:54.176 98.00000% : 18588.393us 00:08:54.176 99.00000% : 20852.364us 00:08:54.176 99.50000% : 27882.589us 00:08:54.176 99.90000% : 29431.622us 00:08:54.176 99.99000% : 29789.091us 00:08:54.176 99.99900% : 29789.091us 00:08:54.176 99.99990% : 29789.091us 00:08:54.176 99.99999% : 29789.091us 00:08:54.176 00:08:54.176 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:54.176 ============================================================================== 00:08:54.176 Range in us Cumulative IO count 00:08:54.176 8996.305 - 9055.884: 0.0931% ( 9) 00:08:54.176 9055.884 - 9115.462: 0.1035% ( 1) 00:08:54.176 9115.462 - 9175.040: 0.1242% ( 2) 00:08:54.176 9175.040 - 9234.618: 0.1552% ( 3) 00:08:54.176 9234.618 - 9294.196: 0.1759% ( 2) 00:08:54.176 9294.196 - 9353.775: 0.2483% ( 7) 00:08:54.176 9353.775 - 9413.353: 0.4346% ( 18) 00:08:54.176 9413.353 - 9472.931: 0.5898% ( 15) 00:08:54.176 9472.931 - 9532.509: 0.6726% ( 8) 00:08:54.176 9532.509 - 9592.087: 0.7657% ( 9) 00:08:54.176 9592.087 - 9651.665: 0.8485% ( 8) 00:08:54.176 9651.665 - 9711.244: 0.9830% ( 13) 00:08:54.176 9711.244 - 9770.822: 1.1589% ( 17) 00:08:54.176 9770.822 - 9830.400: 1.5522% ( 38) 00:08:54.176 9830.400 - 9889.978: 1.8936% ( 33) 00:08:54.176 9889.978 - 9949.556: 2.2765% ( 37) 00:08:54.176 9949.556 - 10009.135: 2.6387% ( 35) 00:08:54.176 10009.135 - 10068.713: 3.0940% ( 44) 00:08:54.176 10068.713 - 10128.291: 3.4872% ( 38) 00:08:54.176 10128.291 - 10187.869: 3.9114% ( 41) 00:08:54.176 10187.869 - 10247.447: 4.3978% ( 47) 00:08:54.176 10247.447 - 10307.025: 4.9462% ( 53) 00:08:54.176 10307.025 - 10366.604: 5.6188% ( 65) 00:08:54.176 10366.604 - 10426.182: 6.1879% ( 55) 00:08:54.176 10426.182 - 10485.760: 7.0675% ( 85) 00:08:54.176 10485.760 - 10545.338: 7.8746% ( 78) 00:08:54.176 10545.338 - 10604.916: 8.5782% ( 68) 00:08:54.176 10604.916 - 10664.495: 9.2819% ( 68) 00:08:54.176 10664.495 - 10724.073: 9.8820% ( 58) 00:08:54.176 10724.073 - 10783.651: 10.5339% ( 63) 00:08:54.176 10783.651 - 10843.229: 11.2790% ( 72) 00:08:54.176 10843.229 - 10902.807: 12.1275% ( 82) 00:08:54.176 10902.807 - 
10962.385: 12.9656% ( 81) 00:08:54.176 10962.385 - 11021.964: 13.7831% ( 79) 00:08:54.176 11021.964 - 11081.542: 14.5902% ( 78) 00:08:54.176 11081.542 - 11141.120: 15.4077% ( 79) 00:08:54.176 11141.120 - 11200.698: 16.3493% ( 91) 00:08:54.176 11200.698 - 11260.276: 17.3220% ( 94) 00:08:54.176 11260.276 - 11319.855: 18.2430% ( 89) 00:08:54.176 11319.855 - 11379.433: 19.2984% ( 102) 00:08:54.176 11379.433 - 11439.011: 20.4263% ( 109) 00:08:54.176 11439.011 - 11498.589: 21.4818% ( 102) 00:08:54.176 11498.589 - 11558.167: 22.7028% ( 118) 00:08:54.176 11558.167 - 11617.745: 23.9652% ( 122) 00:08:54.176 11617.745 - 11677.324: 25.3001% ( 129) 00:08:54.176 11677.324 - 11736.902: 26.6349% ( 129) 00:08:54.176 11736.902 - 11796.480: 27.8767% ( 120) 00:08:54.176 11796.480 - 11856.058: 29.2632% ( 134) 00:08:54.176 11856.058 - 11915.636: 30.4636% ( 116) 00:08:54.176 11915.636 - 11975.215: 31.6639% ( 116) 00:08:54.176 11975.215 - 12034.793: 32.8849% ( 118) 00:08:54.176 12034.793 - 12094.371: 34.2094% ( 128) 00:08:54.176 12094.371 - 12153.949: 35.5443% ( 129) 00:08:54.176 12153.949 - 12213.527: 36.7032% ( 112) 00:08:54.176 12213.527 - 12273.105: 37.8829% ( 114) 00:08:54.176 12273.105 - 12332.684: 39.1660% ( 124) 00:08:54.176 12332.684 - 12392.262: 40.4387% ( 123) 00:08:54.176 12392.262 - 12451.840: 41.7219% ( 124) 00:08:54.176 12451.840 - 12511.418: 43.1291% ( 136) 00:08:54.176 12511.418 - 12570.996: 44.5468% ( 137) 00:08:54.176 12570.996 - 12630.575: 45.9023% ( 131) 00:08:54.176 12630.575 - 12690.153: 47.1026% ( 116) 00:08:54.176 12690.153 - 12749.731: 48.2099% ( 107) 00:08:54.176 12749.731 - 12809.309: 49.2653% ( 102) 00:08:54.176 12809.309 - 12868.887: 50.3932% ( 109) 00:08:54.176 12868.887 - 12928.465: 51.3555% ( 93) 00:08:54.176 12928.465 - 12988.044: 52.6283% ( 123) 00:08:54.176 12988.044 - 13047.622: 53.8907% ( 122) 00:08:54.176 13047.622 - 13107.200: 54.9565% ( 103) 00:08:54.176 13107.200 - 13166.778: 56.1465% ( 115) 00:08:54.176 13166.778 - 13226.356: 57.1296% ( 95) 00:08:54.176 13226.356 - 13285.935: 58.2885% ( 112) 00:08:54.176 13285.935 - 13345.513: 59.4578% ( 113) 00:08:54.176 13345.513 - 13405.091: 60.5753% ( 108) 00:08:54.176 13405.091 - 13464.669: 61.7860% ( 117) 00:08:54.176 13464.669 - 13524.247: 62.8415% ( 102) 00:08:54.176 13524.247 - 13583.825: 64.1453% ( 126) 00:08:54.176 13583.825 - 13643.404: 65.5112% ( 132) 00:08:54.176 13643.404 - 13702.982: 66.8046% ( 125) 00:08:54.176 13702.982 - 13762.560: 68.0360% ( 119) 00:08:54.176 13762.560 - 13822.138: 69.3605% ( 128) 00:08:54.176 13822.138 - 13881.716: 70.5505% ( 115) 00:08:54.176 13881.716 - 13941.295: 71.7198% ( 113) 00:08:54.176 13941.295 - 14000.873: 73.0029% ( 124) 00:08:54.176 14000.873 - 14060.451: 74.1929% ( 115) 00:08:54.176 14060.451 - 14120.029: 75.2587% ( 103) 00:08:54.176 14120.029 - 14179.607: 76.4176% ( 112) 00:08:54.176 14179.607 - 14239.185: 77.5455% ( 109) 00:08:54.176 14239.185 - 14298.764: 78.5803% ( 100) 00:08:54.176 14298.764 - 14358.342: 79.5219% ( 91) 00:08:54.176 14358.342 - 14417.920: 80.4015% ( 85) 00:08:54.176 14417.920 - 14477.498: 81.3742% ( 94) 00:08:54.176 14477.498 - 14537.076: 82.1502% ( 75) 00:08:54.176 14537.076 - 14596.655: 82.9470% ( 77) 00:08:54.176 14596.655 - 14656.233: 83.7334% ( 76) 00:08:54.176 14656.233 - 14715.811: 84.4474% ( 69) 00:08:54.176 14715.811 - 14775.389: 85.0166% ( 55) 00:08:54.176 14775.389 - 14834.967: 85.6167% ( 58) 00:08:54.176 14834.967 - 14894.545: 86.1962% ( 56) 00:08:54.176 14894.545 - 14954.124: 86.7032% ( 49) 00:08:54.176 14954.124 - 15013.702: 87.1068% ( 39) 00:08:54.176 
15013.702 - 15073.280: 87.5207% ( 40) 00:08:54.176 15073.280 - 15132.858: 87.9243% ( 39) 00:08:54.176 15132.858 - 15192.436: 88.3796% ( 44) 00:08:54.176 15192.436 - 15252.015: 88.8452% ( 45) 00:08:54.176 15252.015 - 15371.171: 89.8282% ( 95) 00:08:54.176 15371.171 - 15490.327: 90.5215% ( 67) 00:08:54.176 15490.327 - 15609.484: 91.0596% ( 52) 00:08:54.176 15609.484 - 15728.640: 91.6287% ( 55) 00:08:54.176 15728.640 - 15847.796: 92.2082% ( 56) 00:08:54.176 15847.796 - 15966.953: 92.8291% ( 60) 00:08:54.176 15966.953 - 16086.109: 93.3671% ( 52) 00:08:54.176 16086.109 - 16205.265: 93.8845% ( 50) 00:08:54.176 16205.265 - 16324.422: 94.4329% ( 53) 00:08:54.176 16324.422 - 16443.578: 94.8469% ( 40) 00:08:54.176 16443.578 - 16562.735: 95.3435% ( 48) 00:08:54.176 16562.735 - 16681.891: 95.7575% ( 40) 00:08:54.176 16681.891 - 16801.047: 95.9541% ( 19) 00:08:54.176 16801.047 - 16920.204: 96.1507% ( 19) 00:08:54.176 16920.204 - 17039.360: 96.3990% ( 24) 00:08:54.176 17039.360 - 17158.516: 96.5956% ( 19) 00:08:54.176 17158.516 - 17277.673: 96.7508% ( 15) 00:08:54.176 17277.673 - 17396.829: 96.8957% ( 14) 00:08:54.176 17396.829 - 17515.985: 97.0509% ( 15) 00:08:54.176 17515.985 - 17635.142: 97.2165% ( 16) 00:08:54.176 17635.142 - 17754.298: 97.4027% ( 18) 00:08:54.176 17754.298 - 17873.455: 97.5269% ( 12) 00:08:54.176 17873.455 - 17992.611: 97.5993% ( 7) 00:08:54.176 17992.611 - 18111.767: 97.6407% ( 4) 00:08:54.176 18111.767 - 18230.924: 97.7132% ( 7) 00:08:54.176 18230.924 - 18350.080: 97.8373% ( 12) 00:08:54.176 18350.080 - 18469.236: 97.9408% ( 10) 00:08:54.176 18469.236 - 18588.393: 98.0443% ( 10) 00:08:54.176 18588.393 - 18707.549: 98.1581% ( 11) 00:08:54.176 18707.549 - 18826.705: 98.2616% ( 10) 00:08:54.176 18826.705 - 18945.862: 98.3547% ( 9) 00:08:54.176 18945.862 - 19065.018: 98.3961% ( 4) 00:08:54.176 19065.018 - 19184.175: 98.4582% ( 6) 00:08:54.176 19184.175 - 19303.331: 98.4892% ( 3) 00:08:54.176 19303.331 - 19422.487: 98.5513% ( 6) 00:08:54.176 19422.487 - 19541.644: 98.6134% ( 6) 00:08:54.176 19541.644 - 19660.800: 98.6651% ( 5) 00:08:54.176 19660.800 - 19779.956: 98.6755% ( 1) 00:08:54.176 33125.469 - 33363.782: 98.6962% ( 2) 00:08:54.176 33363.782 - 33602.095: 98.7479% ( 5) 00:08:54.176 33602.095 - 33840.407: 98.8100% ( 6) 00:08:54.176 33840.407 - 34078.720: 98.8514% ( 4) 00:08:54.176 34078.720 - 34317.033: 98.9031% ( 5) 00:08:54.176 34317.033 - 34555.345: 98.9549% ( 5) 00:08:54.176 34555.345 - 34793.658: 99.0066% ( 5) 00:08:54.176 34793.658 - 35031.971: 99.0687% ( 6) 00:08:54.176 35031.971 - 35270.284: 99.1204% ( 5) 00:08:54.176 35270.284 - 35508.596: 99.1722% ( 5) 00:08:54.176 35508.596 - 35746.909: 99.2239% ( 5) 00:08:54.176 35746.909 - 35985.222: 99.2860% ( 6) 00:08:54.176 35985.222 - 36223.535: 99.3377% ( 5) 00:08:54.176 42896.291 - 43134.604: 99.3895% ( 5) 00:08:54.176 43134.604 - 43372.916: 99.4516% ( 6) 00:08:54.177 43372.916 - 43611.229: 99.4930% ( 4) 00:08:54.177 43611.229 - 43849.542: 99.5447% ( 5) 00:08:54.177 43849.542 - 44087.855: 99.6068% ( 6) 00:08:54.177 44087.855 - 44326.167: 99.6689% ( 6) 00:08:54.177 44326.167 - 44564.480: 99.7206% ( 5) 00:08:54.177 44564.480 - 44802.793: 99.7827% ( 6) 00:08:54.177 44802.793 - 45041.105: 99.8137% ( 3) 00:08:54.177 45041.105 - 45279.418: 99.8655% ( 5) 00:08:54.177 45279.418 - 45517.731: 99.9379% ( 7) 00:08:54.177 45517.731 - 45756.044: 99.9897% ( 5) 00:08:54.177 45756.044 - 45994.356: 100.0000% ( 1) 00:08:54.177 00:08:54.177 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:54.177 
============================================================================== 00:08:54.177 Range in us Cumulative IO count 00:08:54.177 9175.040 - 9234.618: 0.0103% ( 1) 00:08:54.177 9234.618 - 9294.196: 0.0517% ( 4) 00:08:54.177 9294.196 - 9353.775: 0.0828% ( 3) 00:08:54.177 9353.775 - 9413.353: 0.1242% ( 4) 00:08:54.177 9413.353 - 9472.931: 0.1656% ( 4) 00:08:54.177 9472.931 - 9532.509: 0.3208% ( 15) 00:08:54.177 9532.509 - 9592.087: 0.5174% ( 19) 00:08:54.177 9592.087 - 9651.665: 0.7036% ( 18) 00:08:54.177 9651.665 - 9711.244: 0.8796% ( 17) 00:08:54.177 9711.244 - 9770.822: 1.1072% ( 22) 00:08:54.177 9770.822 - 9830.400: 1.3555% ( 24) 00:08:54.177 9830.400 - 9889.978: 1.6142% ( 25) 00:08:54.177 9889.978 - 9949.556: 1.9350% ( 31) 00:08:54.177 9949.556 - 10009.135: 2.3903% ( 44) 00:08:54.177 10009.135 - 10068.713: 2.8663% ( 46) 00:08:54.177 10068.713 - 10128.291: 3.3216% ( 44) 00:08:54.177 10128.291 - 10187.869: 3.7562% ( 42) 00:08:54.177 10187.869 - 10247.447: 4.2322% ( 46) 00:08:54.177 10247.447 - 10307.025: 4.7082% ( 46) 00:08:54.177 10307.025 - 10366.604: 5.1945% ( 47) 00:08:54.177 10366.604 - 10426.182: 5.7430% ( 53) 00:08:54.177 10426.182 - 10485.760: 6.6018% ( 83) 00:08:54.177 10485.760 - 10545.338: 7.5331% ( 90) 00:08:54.177 10545.338 - 10604.916: 8.4230% ( 86) 00:08:54.177 10604.916 - 10664.495: 9.2612% ( 81) 00:08:54.177 10664.495 - 10724.073: 10.1511% ( 86) 00:08:54.177 10724.073 - 10783.651: 10.9272% ( 75) 00:08:54.177 10783.651 - 10843.229: 11.7653% ( 81) 00:08:54.177 10843.229 - 10902.807: 12.5621% ( 77) 00:08:54.177 10902.807 - 10962.385: 13.5244% ( 93) 00:08:54.177 10962.385 - 11021.964: 14.4764% ( 92) 00:08:54.177 11021.964 - 11081.542: 15.4180% ( 91) 00:08:54.177 11081.542 - 11141.120: 16.2666% ( 82) 00:08:54.177 11141.120 - 11200.698: 16.9495% ( 66) 00:08:54.177 11200.698 - 11260.276: 17.7049% ( 73) 00:08:54.177 11260.276 - 11319.855: 18.4603% ( 73) 00:08:54.177 11319.855 - 11379.433: 19.4536% ( 96) 00:08:54.177 11379.433 - 11439.011: 20.4781% ( 99) 00:08:54.177 11439.011 - 11498.589: 21.8129% ( 129) 00:08:54.177 11498.589 - 11558.167: 22.9408% ( 109) 00:08:54.177 11558.167 - 11617.745: 24.0687% ( 109) 00:08:54.177 11617.745 - 11677.324: 25.2070% ( 110) 00:08:54.177 11677.324 - 11736.902: 26.3349% ( 109) 00:08:54.177 11736.902 - 11796.480: 27.4421% ( 107) 00:08:54.177 11796.480 - 11856.058: 28.6631% ( 118) 00:08:54.177 11856.058 - 11915.636: 30.0704% ( 136) 00:08:54.177 11915.636 - 11975.215: 31.5501% ( 143) 00:08:54.177 11975.215 - 12034.793: 32.9263% ( 133) 00:08:54.177 12034.793 - 12094.371: 34.3647% ( 139) 00:08:54.177 12094.371 - 12153.949: 35.6064% ( 120) 00:08:54.177 12153.949 - 12213.527: 36.8274% ( 118) 00:08:54.177 12213.527 - 12273.105: 38.1623% ( 129) 00:08:54.177 12273.105 - 12332.684: 39.4040% ( 120) 00:08:54.177 12332.684 - 12392.262: 40.5629% ( 112) 00:08:54.177 12392.262 - 12451.840: 41.7219% ( 112) 00:08:54.177 12451.840 - 12511.418: 42.7877% ( 103) 00:08:54.177 12511.418 - 12570.996: 43.7914% ( 97) 00:08:54.177 12570.996 - 12630.575: 44.7641% ( 94) 00:08:54.177 12630.575 - 12690.153: 45.7988% ( 100) 00:08:54.177 12690.153 - 12749.731: 46.8853% ( 105) 00:08:54.177 12749.731 - 12809.309: 47.8270% ( 91) 00:08:54.177 12809.309 - 12868.887: 48.7583% ( 90) 00:08:54.177 12868.887 - 12928.465: 49.8241% ( 103) 00:08:54.177 12928.465 - 12988.044: 50.7347% ( 88) 00:08:54.177 12988.044 - 13047.622: 51.8005% ( 103) 00:08:54.177 13047.622 - 13107.200: 52.6697% ( 84) 00:08:54.177 13107.200 - 13166.778: 53.7148% ( 101) 00:08:54.177 13166.778 - 13226.356: 54.7703% ( 
102) 00:08:54.177 13226.356 - 13285.935: 55.7223% ( 92) 00:08:54.177 13285.935 - 13345.513: 56.8088% ( 105) 00:08:54.177 13345.513 - 13405.091: 57.9884% ( 114) 00:08:54.177 13405.091 - 13464.669: 59.3336% ( 130) 00:08:54.177 13464.669 - 13524.247: 60.6478% ( 127) 00:08:54.177 13524.247 - 13583.825: 61.9412% ( 125) 00:08:54.177 13583.825 - 13643.404: 63.3071% ( 132) 00:08:54.177 13643.404 - 13702.982: 64.5281% ( 118) 00:08:54.177 13702.982 - 13762.560: 65.8216% ( 125) 00:08:54.177 13762.560 - 13822.138: 67.1254% ( 126) 00:08:54.177 13822.138 - 13881.716: 68.3982% ( 123) 00:08:54.177 13881.716 - 13941.295: 69.6813% ( 124) 00:08:54.177 13941.295 - 14000.873: 70.9851% ( 126) 00:08:54.177 14000.873 - 14060.451: 72.2682% ( 124) 00:08:54.177 14060.451 - 14120.029: 73.6238% ( 131) 00:08:54.177 14120.029 - 14179.607: 74.8241% ( 116) 00:08:54.177 14179.607 - 14239.185: 76.0348% ( 117) 00:08:54.177 14239.185 - 14298.764: 77.3075% ( 123) 00:08:54.177 14298.764 - 14358.342: 78.4872% ( 114) 00:08:54.177 14358.342 - 14417.920: 79.5840% ( 106) 00:08:54.177 14417.920 - 14477.498: 80.8464% ( 122) 00:08:54.177 14477.498 - 14537.076: 81.9536% ( 107) 00:08:54.177 14537.076 - 14596.655: 83.0608% ( 107) 00:08:54.177 14596.655 - 14656.233: 83.9300% ( 84) 00:08:54.177 14656.233 - 14715.811: 84.6854% ( 73) 00:08:54.177 14715.811 - 14775.389: 85.3063% ( 60) 00:08:54.177 14775.389 - 14834.967: 85.7719% ( 45) 00:08:54.177 14834.967 - 14894.545: 86.2272% ( 44) 00:08:54.177 14894.545 - 14954.124: 86.6515% ( 41) 00:08:54.177 14954.124 - 15013.702: 87.0757% ( 41) 00:08:54.177 15013.702 - 15073.280: 87.4793% ( 39) 00:08:54.177 15073.280 - 15132.858: 87.9243% ( 43) 00:08:54.177 15132.858 - 15192.436: 88.4002% ( 46) 00:08:54.177 15192.436 - 15252.015: 88.8142% ( 40) 00:08:54.177 15252.015 - 15371.171: 89.5075% ( 67) 00:08:54.177 15371.171 - 15490.327: 90.5008% ( 96) 00:08:54.177 15490.327 - 15609.484: 91.3390% ( 81) 00:08:54.177 15609.484 - 15728.640: 92.0426% ( 68) 00:08:54.177 15728.640 - 15847.796: 92.7980% ( 73) 00:08:54.177 15847.796 - 15966.953: 93.4913% ( 67) 00:08:54.177 15966.953 - 16086.109: 94.3812% ( 86) 00:08:54.177 16086.109 - 16205.265: 95.2297% ( 82) 00:08:54.177 16205.265 - 16324.422: 95.9541% ( 70) 00:08:54.177 16324.422 - 16443.578: 96.5749% ( 60) 00:08:54.177 16443.578 - 16562.735: 97.0509% ( 46) 00:08:54.177 16562.735 - 16681.891: 97.2889% ( 23) 00:08:54.177 16681.891 - 16801.047: 97.5993% ( 30) 00:08:54.177 16801.047 - 16920.204: 97.9201% ( 31) 00:08:54.177 16920.204 - 17039.360: 98.1892% ( 26) 00:08:54.177 17039.360 - 17158.516: 98.2926% ( 10) 00:08:54.177 17158.516 - 17277.673: 98.3547% ( 6) 00:08:54.177 17277.673 - 17396.829: 98.4065% ( 5) 00:08:54.177 17396.829 - 17515.985: 98.4685% ( 6) 00:08:54.177 17515.985 - 17635.142: 98.5203% ( 5) 00:08:54.177 17635.142 - 17754.298: 98.5824% ( 6) 00:08:54.177 17754.298 - 17873.455: 98.6445% ( 6) 00:08:54.177 17873.455 - 17992.611: 98.6755% ( 3) 00:08:54.177 30980.655 - 31218.967: 98.6962% ( 2) 00:08:54.177 31218.967 - 31457.280: 98.7479% ( 5) 00:08:54.177 31457.280 - 31695.593: 98.8100% ( 6) 00:08:54.177 31695.593 - 31933.905: 98.8618% ( 5) 00:08:54.177 31933.905 - 32172.218: 98.9135% ( 5) 00:08:54.177 32172.218 - 32410.531: 98.9756% ( 6) 00:08:54.177 32410.531 - 32648.844: 99.0377% ( 6) 00:08:54.177 32648.844 - 32887.156: 99.0998% ( 6) 00:08:54.177 32887.156 - 33125.469: 99.1515% ( 5) 00:08:54.177 33125.469 - 33363.782: 99.2032% ( 5) 00:08:54.177 33363.782 - 33602.095: 99.2653% ( 6) 00:08:54.177 33602.095 - 33840.407: 99.3274% ( 6) 00:08:54.177 33840.407 - 
34078.720: 99.3377% ( 1) 00:08:54.177 39559.913 - 39798.225: 99.3688% ( 3) 00:08:54.177 39798.225 - 40036.538: 99.4309% ( 6) 00:08:54.177 40036.538 - 40274.851: 99.4930% ( 6) 00:08:54.177 40274.851 - 40513.164: 99.5447% ( 5) 00:08:54.177 40513.164 - 40751.476: 99.6068% ( 6) 00:08:54.177 40751.476 - 40989.789: 99.6689% ( 6) 00:08:54.177 40989.789 - 41228.102: 99.7310% ( 6) 00:08:54.177 41228.102 - 41466.415: 99.7930% ( 6) 00:08:54.177 41466.415 - 41704.727: 99.8551% ( 6) 00:08:54.177 41704.727 - 41943.040: 99.8965% ( 4) 00:08:54.177 41943.040 - 42181.353: 99.9586% ( 6) 00:08:54.177 42181.353 - 42419.665: 100.0000% ( 4) 00:08:54.177 00:08:54.177 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:54.177 ============================================================================== 00:08:54.177 Range in us Cumulative IO count 00:08:54.177 9472.931 - 9532.509: 0.0207% ( 2) 00:08:54.177 9532.509 - 9592.087: 0.1552% ( 13) 00:08:54.177 9592.087 - 9651.665: 0.3208% ( 16) 00:08:54.177 9651.665 - 9711.244: 0.5588% ( 23) 00:08:54.177 9711.244 - 9770.822: 0.9002% ( 33) 00:08:54.177 9770.822 - 9830.400: 1.3452% ( 43) 00:08:54.178 9830.400 - 9889.978: 1.6867% ( 33) 00:08:54.178 9889.978 - 9949.556: 2.0281% ( 33) 00:08:54.178 9949.556 - 10009.135: 2.2868% ( 25) 00:08:54.178 10009.135 - 10068.713: 2.6076% ( 31) 00:08:54.178 10068.713 - 10128.291: 2.9180% ( 30) 00:08:54.178 10128.291 - 10187.869: 3.3216% ( 39) 00:08:54.178 10187.869 - 10247.447: 3.7459% ( 41) 00:08:54.178 10247.447 - 10307.025: 4.3150% ( 55) 00:08:54.178 10307.025 - 10366.604: 4.9358% ( 60) 00:08:54.178 10366.604 - 10426.182: 5.5877% ( 63) 00:08:54.178 10426.182 - 10485.760: 6.3121% ( 70) 00:08:54.178 10485.760 - 10545.338: 7.0882% ( 75) 00:08:54.178 10545.338 - 10604.916: 7.8228% ( 71) 00:08:54.178 10604.916 - 10664.495: 8.5472% ( 70) 00:08:54.178 10664.495 - 10724.073: 9.3750% ( 80) 00:08:54.178 10724.073 - 10783.651: 10.3891% ( 98) 00:08:54.178 10783.651 - 10843.229: 11.2790% ( 86) 00:08:54.178 10843.229 - 10902.807: 12.0447% ( 74) 00:08:54.178 10902.807 - 10962.385: 12.7173% ( 65) 00:08:54.178 10962.385 - 11021.964: 13.4520% ( 71) 00:08:54.178 11021.964 - 11081.542: 14.2695% ( 79) 00:08:54.178 11081.542 - 11141.120: 15.1490% ( 85) 00:08:54.178 11141.120 - 11200.698: 16.0079% ( 83) 00:08:54.178 11200.698 - 11260.276: 16.9288% ( 89) 00:08:54.178 11260.276 - 11319.855: 17.9222% ( 96) 00:08:54.178 11319.855 - 11379.433: 18.9259% ( 97) 00:08:54.178 11379.433 - 11439.011: 20.1262% ( 116) 00:08:54.178 11439.011 - 11498.589: 21.4300% ( 126) 00:08:54.178 11498.589 - 11558.167: 22.8477% ( 137) 00:08:54.178 11558.167 - 11617.745: 24.3067% ( 141) 00:08:54.178 11617.745 - 11677.324: 25.8589% ( 150) 00:08:54.178 11677.324 - 11736.902: 27.3179% ( 141) 00:08:54.178 11736.902 - 11796.480: 28.7148% ( 135) 00:08:54.178 11796.480 - 11856.058: 30.1325% ( 137) 00:08:54.178 11856.058 - 11915.636: 31.5604% ( 138) 00:08:54.178 11915.636 - 11975.215: 32.8228% ( 122) 00:08:54.178 11975.215 - 12034.793: 34.1370% ( 127) 00:08:54.178 12034.793 - 12094.371: 35.3994% ( 122) 00:08:54.178 12094.371 - 12153.949: 36.7446% ( 130) 00:08:54.178 12153.949 - 12213.527: 38.0381% ( 125) 00:08:54.178 12213.527 - 12273.105: 39.3419% ( 126) 00:08:54.178 12273.105 - 12332.684: 40.4284% ( 105) 00:08:54.178 12332.684 - 12392.262: 41.5563% ( 109) 00:08:54.178 12392.262 - 12451.840: 42.7152% ( 112) 00:08:54.178 12451.840 - 12511.418: 43.9466% ( 119) 00:08:54.178 12511.418 - 12570.996: 45.1366% ( 115) 00:08:54.178 12570.996 - 12630.575: 46.1196% ( 95) 00:08:54.178 
12630.575 - 12690.153: 47.0302% ( 88) 00:08:54.178 12690.153 - 12749.731: 47.8787% ( 82) 00:08:54.178 12749.731 - 12809.309: 48.6962% ( 79) 00:08:54.178 12809.309 - 12868.887: 49.5550% ( 83) 00:08:54.178 12868.887 - 12928.465: 50.5174% ( 93) 00:08:54.178 12928.465 - 12988.044: 51.4590% ( 91) 00:08:54.178 12988.044 - 13047.622: 52.4834% ( 99) 00:08:54.178 13047.622 - 13107.200: 53.2802% ( 77) 00:08:54.178 13107.200 - 13166.778: 54.1494% ( 84) 00:08:54.178 13166.778 - 13226.356: 55.1635% ( 98) 00:08:54.178 13226.356 - 13285.935: 56.0637% ( 87) 00:08:54.178 13285.935 - 13345.513: 57.0364% ( 94) 00:08:54.178 13345.513 - 13405.091: 57.8849% ( 82) 00:08:54.178 13405.091 - 13464.669: 59.0542% ( 113) 00:08:54.178 13464.669 - 13524.247: 60.3270% ( 123) 00:08:54.178 13524.247 - 13583.825: 61.8067% ( 143) 00:08:54.178 13583.825 - 13643.404: 63.2657% ( 141) 00:08:54.178 13643.404 - 13702.982: 64.6420% ( 133) 00:08:54.178 13702.982 - 13762.560: 66.1217% ( 143) 00:08:54.178 13762.560 - 13822.138: 67.5083% ( 134) 00:08:54.178 13822.138 - 13881.716: 68.8742% ( 132) 00:08:54.178 13881.716 - 13941.295: 70.2608% ( 134) 00:08:54.178 13941.295 - 14000.873: 71.7198% ( 141) 00:08:54.178 14000.873 - 14060.451: 73.1064% ( 134) 00:08:54.178 14060.451 - 14120.029: 74.3688% ( 122) 00:08:54.178 14120.029 - 14179.607: 75.6002% ( 119) 00:08:54.178 14179.607 - 14239.185: 76.6867% ( 105) 00:08:54.178 14239.185 - 14298.764: 77.8042% ( 108) 00:08:54.178 14298.764 - 14358.342: 78.9218% ( 108) 00:08:54.178 14358.342 - 14417.920: 80.0497% ( 109) 00:08:54.178 14417.920 - 14477.498: 81.1051% ( 102) 00:08:54.178 14477.498 - 14537.076: 82.0364% ( 90) 00:08:54.178 14537.076 - 14596.655: 82.9470% ( 88) 00:08:54.178 14596.655 - 14656.233: 83.8473% ( 87) 00:08:54.178 14656.233 - 14715.811: 84.6026% ( 73) 00:08:54.178 14715.811 - 14775.389: 85.2339% ( 61) 00:08:54.178 14775.389 - 14834.967: 85.8133% ( 56) 00:08:54.178 14834.967 - 14894.545: 86.3100% ( 48) 00:08:54.178 14894.545 - 14954.124: 86.7757% ( 45) 00:08:54.178 14954.124 - 15013.702: 87.1792% ( 39) 00:08:54.178 15013.702 - 15073.280: 87.5000% ( 31) 00:08:54.178 15073.280 - 15132.858: 87.8829% ( 37) 00:08:54.178 15132.858 - 15192.436: 88.2036% ( 31) 00:08:54.178 15192.436 - 15252.015: 88.5037% ( 29) 00:08:54.178 15252.015 - 15371.171: 89.2798% ( 75) 00:08:54.178 15371.171 - 15490.327: 90.1283% ( 82) 00:08:54.178 15490.327 - 15609.484: 91.0079% ( 85) 00:08:54.178 15609.484 - 15728.640: 91.7839% ( 75) 00:08:54.178 15728.640 - 15847.796: 92.5290% ( 72) 00:08:54.178 15847.796 - 15966.953: 93.3361% ( 78) 00:08:54.178 15966.953 - 16086.109: 94.1950% ( 83) 00:08:54.178 16086.109 - 16205.265: 94.9814% ( 76) 00:08:54.178 16205.265 - 16324.422: 95.6643% ( 66) 00:08:54.178 16324.422 - 16443.578: 96.2955% ( 61) 00:08:54.178 16443.578 - 16562.735: 96.8957% ( 58) 00:08:54.178 16562.735 - 16681.891: 97.3406% ( 43) 00:08:54.178 16681.891 - 16801.047: 97.6718% ( 32) 00:08:54.178 16801.047 - 16920.204: 97.9719% ( 29) 00:08:54.178 16920.204 - 17039.360: 98.2305% ( 25) 00:08:54.178 17039.360 - 17158.516: 98.4478% ( 21) 00:08:54.178 17158.516 - 17277.673: 98.5513% ( 10) 00:08:54.178 17277.673 - 17396.829: 98.6341% ( 8) 00:08:54.178 17396.829 - 17515.985: 98.6755% ( 4) 00:08:54.178 29193.309 - 29312.465: 98.7790% ( 10) 00:08:54.178 29312.465 - 29431.622: 98.8100% ( 3) 00:08:54.178 29431.622 - 29550.778: 98.8307% ( 2) 00:08:54.178 29550.778 - 29669.935: 98.8514% ( 2) 00:08:54.178 29669.935 - 29789.091: 98.8825% ( 3) 00:08:54.178 29789.091 - 29908.247: 98.9031% ( 2) 00:08:54.178 29908.247 - 30027.404: 
98.9342% ( 3) 00:08:54.178 30027.404 - 30146.560: 98.9549% ( 2) 00:08:54.178 30146.560 - 30265.716: 98.9859% ( 3) 00:08:54.178 30265.716 - 30384.873: 99.0066% ( 2) 00:08:54.178 30384.873 - 30504.029: 99.0480% ( 4) 00:08:54.178 30504.029 - 30742.342: 99.1101% ( 6) 00:08:54.178 30742.342 - 30980.655: 99.1618% ( 5) 00:08:54.178 30980.655 - 31218.967: 99.2136% ( 5) 00:08:54.178 31218.967 - 31457.280: 99.2757% ( 6) 00:08:54.178 31457.280 - 31695.593: 99.3274% ( 5) 00:08:54.178 31695.593 - 31933.905: 99.3377% ( 1) 00:08:54.178 36700.160 - 36938.473: 99.3481% ( 1) 00:08:54.178 36938.473 - 37176.785: 99.4102% ( 6) 00:08:54.178 37176.785 - 37415.098: 99.4723% ( 6) 00:08:54.178 37415.098 - 37653.411: 99.5137% ( 4) 00:08:54.178 37653.411 - 37891.724: 99.5757% ( 6) 00:08:54.178 37891.724 - 38130.036: 99.6275% ( 5) 00:08:54.178 38130.036 - 38368.349: 99.6896% ( 6) 00:08:54.178 38368.349 - 38606.662: 99.7517% ( 6) 00:08:54.178 38606.662 - 38844.975: 99.8034% ( 5) 00:08:54.178 38844.975 - 39083.287: 99.8655% ( 6) 00:08:54.178 39083.287 - 39321.600: 99.9172% ( 5) 00:08:54.178 39321.600 - 39559.913: 99.9793% ( 6) 00:08:54.178 39559.913 - 39798.225: 100.0000% ( 2) 00:08:54.178 00:08:54.178 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:54.178 ============================================================================== 00:08:54.178 Range in us Cumulative IO count 00:08:54.178 9353.775 - 9413.353: 0.0517% ( 5) 00:08:54.178 9413.353 - 9472.931: 0.1138% ( 6) 00:08:54.178 9472.931 - 9532.509: 0.1759% ( 6) 00:08:54.178 9532.509 - 9592.087: 0.2690% ( 9) 00:08:54.178 9592.087 - 9651.665: 0.4967% ( 22) 00:08:54.178 9651.665 - 9711.244: 0.7864% ( 28) 00:08:54.178 9711.244 - 9770.822: 1.0141% ( 22) 00:08:54.178 9770.822 - 9830.400: 1.2314% ( 21) 00:08:54.178 9830.400 - 9889.978: 1.4280% ( 19) 00:08:54.178 9889.978 - 9949.556: 1.6763% ( 24) 00:08:54.178 9949.556 - 10009.135: 1.9557% ( 27) 00:08:54.178 10009.135 - 10068.713: 2.2868% ( 32) 00:08:54.178 10068.713 - 10128.291: 2.6490% ( 35) 00:08:54.178 10128.291 - 10187.869: 3.0526% ( 39) 00:08:54.178 10187.869 - 10247.447: 3.6734% ( 60) 00:08:54.178 10247.447 - 10307.025: 4.3771% ( 68) 00:08:54.178 10307.025 - 10366.604: 5.0807% ( 68) 00:08:54.178 10366.604 - 10426.182: 5.9292% ( 82) 00:08:54.178 10426.182 - 10485.760: 6.7260% ( 77) 00:08:54.178 10485.760 - 10545.338: 7.4917% ( 74) 00:08:54.178 10545.338 - 10604.916: 8.4023% ( 88) 00:08:54.178 10604.916 - 10664.495: 9.3543% ( 92) 00:08:54.178 10664.495 - 10724.073: 10.1511% ( 77) 00:08:54.178 10724.073 - 10783.651: 10.9375% ( 76) 00:08:54.178 10783.651 - 10843.229: 11.7239% ( 76) 00:08:54.178 10843.229 - 10902.807: 12.5621% ( 81) 00:08:54.178 10902.807 - 10962.385: 13.4106% ( 82) 00:08:54.178 10962.385 - 11021.964: 14.3005% ( 86) 00:08:54.178 11021.964 - 11081.542: 15.2007% ( 87) 00:08:54.178 11081.542 - 11141.120: 16.1320% ( 90) 00:08:54.178 11141.120 - 11200.698: 17.0012% ( 84) 00:08:54.178 11200.698 - 11260.276: 17.9222% ( 89) 00:08:54.178 11260.276 - 11319.855: 18.8535% ( 90) 00:08:54.178 11319.855 - 11379.433: 19.9917% ( 110) 00:08:54.178 11379.433 - 11439.011: 21.0161% ( 99) 00:08:54.178 11439.011 - 11498.589: 22.1130% ( 106) 00:08:54.179 11498.589 - 11558.167: 23.6341% ( 147) 00:08:54.179 11558.167 - 11617.745: 25.0931% ( 141) 00:08:54.179 11617.745 - 11677.324: 26.4383% ( 130) 00:08:54.179 11677.324 - 11736.902: 28.0319% ( 154) 00:08:54.179 11736.902 - 11796.480: 29.5012% ( 142) 00:08:54.179 11796.480 - 11856.058: 30.8257% ( 128) 00:08:54.179 11856.058 - 11915.636: 32.2227% ( 135) 
00:08:54.179 11915.636 - 11975.215: 33.5368% ( 127) 00:08:54.179 11975.215 - 12034.793: 34.8924% ( 131) 00:08:54.179 12034.793 - 12094.371: 36.1341% ( 120) 00:08:54.179 12094.371 - 12153.949: 37.3758% ( 120) 00:08:54.179 12153.949 - 12213.527: 38.5969% ( 118) 00:08:54.179 12213.527 - 12273.105: 39.8179% ( 118) 00:08:54.179 12273.105 - 12332.684: 41.1734% ( 131) 00:08:54.179 12332.684 - 12392.262: 42.2185% ( 101) 00:08:54.179 12392.262 - 12451.840: 43.3775% ( 112) 00:08:54.179 12451.840 - 12511.418: 44.4123% ( 100) 00:08:54.179 12511.418 - 12570.996: 45.3332% ( 89) 00:08:54.179 12570.996 - 12630.575: 46.2231% ( 86) 00:08:54.179 12630.575 - 12690.153: 47.1751% ( 92) 00:08:54.179 12690.153 - 12749.731: 48.1788% ( 97) 00:08:54.179 12749.731 - 12809.309: 49.0377% ( 83) 00:08:54.179 12809.309 - 12868.887: 49.8344% ( 77) 00:08:54.179 12868.887 - 12928.465: 50.4863% ( 63) 00:08:54.179 12928.465 - 12988.044: 51.2521% ( 74) 00:08:54.179 12988.044 - 13047.622: 52.1834% ( 90) 00:08:54.179 13047.622 - 13107.200: 52.8974% ( 69) 00:08:54.179 13107.200 - 13166.778: 53.7562% ( 83) 00:08:54.179 13166.778 - 13226.356: 54.6772% ( 89) 00:08:54.179 13226.356 - 13285.935: 55.5464% ( 84) 00:08:54.179 13285.935 - 13345.513: 56.5294% ( 95) 00:08:54.179 13345.513 - 13405.091: 57.5331% ( 97) 00:08:54.179 13405.091 - 13464.669: 58.5782% ( 101) 00:08:54.179 13464.669 - 13524.247: 59.6337% ( 102) 00:08:54.179 13524.247 - 13583.825: 60.8133% ( 114) 00:08:54.179 13583.825 - 13643.404: 62.0137% ( 116) 00:08:54.179 13643.404 - 13702.982: 63.5244% ( 146) 00:08:54.179 13702.982 - 13762.560: 64.9007% ( 133) 00:08:54.179 13762.560 - 13822.138: 66.2252% ( 128) 00:08:54.179 13822.138 - 13881.716: 67.6118% ( 134) 00:08:54.179 13881.716 - 13941.295: 68.7914% ( 114) 00:08:54.179 13941.295 - 14000.873: 70.0538% ( 122) 00:08:54.179 14000.873 - 14060.451: 71.3369% ( 124) 00:08:54.179 14060.451 - 14120.029: 72.6200% ( 124) 00:08:54.179 14120.029 - 14179.607: 73.9859% ( 132) 00:08:54.179 14179.607 - 14239.185: 75.2794% ( 125) 00:08:54.179 14239.185 - 14298.764: 76.5935% ( 127) 00:08:54.179 14298.764 - 14358.342: 77.8560% ( 122) 00:08:54.179 14358.342 - 14417.920: 79.1080% ( 121) 00:08:54.179 14417.920 - 14477.498: 80.2980% ( 115) 00:08:54.179 14477.498 - 14537.076: 81.5087% ( 117) 00:08:54.179 14537.076 - 14596.655: 82.5435% ( 100) 00:08:54.179 14596.655 - 14656.233: 83.4437% ( 87) 00:08:54.179 14656.233 - 14715.811: 84.4164% ( 94) 00:08:54.179 14715.811 - 14775.389: 85.2132% ( 77) 00:08:54.179 14775.389 - 14834.967: 85.8237% ( 59) 00:08:54.179 14834.967 - 14894.545: 86.3618% ( 52) 00:08:54.179 14894.545 - 14954.124: 86.9102% ( 53) 00:08:54.179 14954.124 - 15013.702: 87.3965% ( 47) 00:08:54.179 15013.702 - 15073.280: 87.8932% ( 48) 00:08:54.179 15073.280 - 15132.858: 88.4106% ( 50) 00:08:54.179 15132.858 - 15192.436: 88.8142% ( 39) 00:08:54.179 15192.436 - 15252.015: 89.1763% ( 35) 00:08:54.179 15252.015 - 15371.171: 89.9110% ( 71) 00:08:54.179 15371.171 - 15490.327: 90.6871% ( 75) 00:08:54.179 15490.327 - 15609.484: 91.3079% ( 60) 00:08:54.179 15609.484 - 15728.640: 92.0116% ( 68) 00:08:54.179 15728.640 - 15847.796: 92.7670% ( 73) 00:08:54.179 15847.796 - 15966.953: 93.3464% ( 56) 00:08:54.179 15966.953 - 16086.109: 94.0294% ( 66) 00:08:54.179 16086.109 - 16205.265: 94.6296% ( 58) 00:08:54.179 16205.265 - 16324.422: 95.1676% ( 52) 00:08:54.179 16324.422 - 16443.578: 95.7678% ( 58) 00:08:54.179 16443.578 - 16562.735: 96.3783% ( 59) 00:08:54.179 16562.735 - 16681.891: 96.7922% ( 40) 00:08:54.179 16681.891 - 16801.047: 97.1751% ( 37) 
00:08:54.179 16801.047 - 16920.204: 97.5062% ( 32) 00:08:54.179 16920.204 - 17039.360: 97.8063% ( 29) 00:08:54.179 17039.360 - 17158.516: 98.0960% ( 28) 00:08:54.179 17158.516 - 17277.673: 98.2823% ( 18) 00:08:54.179 17277.673 - 17396.829: 98.4065% ( 12) 00:08:54.179 17396.829 - 17515.985: 98.4685% ( 6) 00:08:54.179 17515.985 - 17635.142: 98.5306% ( 6) 00:08:54.179 17635.142 - 17754.298: 98.5824% ( 5) 00:08:54.179 17754.298 - 17873.455: 98.6445% ( 6) 00:08:54.179 17873.455 - 17992.611: 98.6755% ( 3) 00:08:54.179 25618.618 - 25737.775: 98.6858% ( 1) 00:08:54.179 25737.775 - 25856.931: 98.7065% ( 2) 00:08:54.179 25856.931 - 25976.087: 98.7272% ( 2) 00:08:54.179 25976.087 - 26095.244: 98.7583% ( 3) 00:08:54.179 26095.244 - 26214.400: 98.7893% ( 3) 00:08:54.179 26214.400 - 26333.556: 98.8204% ( 3) 00:08:54.179 26333.556 - 26452.713: 98.8514% ( 3) 00:08:54.179 26452.713 - 26571.869: 98.8721% ( 2) 00:08:54.179 26571.869 - 26691.025: 98.9031% ( 3) 00:08:54.179 26691.025 - 26810.182: 98.9342% ( 3) 00:08:54.179 26810.182 - 26929.338: 98.9652% ( 3) 00:08:54.179 26929.338 - 27048.495: 98.9963% ( 3) 00:08:54.179 27048.495 - 27167.651: 99.0170% ( 2) 00:08:54.179 27167.651 - 27286.807: 99.0480% ( 3) 00:08:54.179 27286.807 - 27405.964: 99.0791% ( 3) 00:08:54.179 27405.964 - 27525.120: 99.1101% ( 3) 00:08:54.179 27525.120 - 27644.276: 99.1308% ( 2) 00:08:54.179 27644.276 - 27763.433: 99.1618% ( 3) 00:08:54.179 27763.433 - 27882.589: 99.1929% ( 3) 00:08:54.179 27882.589 - 28001.745: 99.2239% ( 3) 00:08:54.179 28001.745 - 28120.902: 99.2550% ( 3) 00:08:54.179 28120.902 - 28240.058: 99.2860% ( 3) 00:08:54.179 28240.058 - 28359.215: 99.3067% ( 2) 00:08:54.179 28359.215 - 28478.371: 99.3377% ( 3) 00:08:54.179 33363.782 - 33602.095: 99.3688% ( 3) 00:08:54.179 33602.095 - 33840.407: 99.4309% ( 6) 00:08:54.179 33840.407 - 34078.720: 99.4930% ( 6) 00:08:54.179 34078.720 - 34317.033: 99.5550% ( 6) 00:08:54.179 34317.033 - 34555.345: 99.6068% ( 5) 00:08:54.179 34555.345 - 34793.658: 99.6689% ( 6) 00:08:54.179 34793.658 - 35031.971: 99.7206% ( 5) 00:08:54.179 35031.971 - 35270.284: 99.7827% ( 6) 00:08:54.179 35270.284 - 35508.596: 99.8448% ( 6) 00:08:54.179 35508.596 - 35746.909: 99.8965% ( 5) 00:08:54.179 35746.909 - 35985.222: 99.9483% ( 5) 00:08:54.179 35985.222 - 36223.535: 100.0000% ( 5) 00:08:54.179 00:08:54.179 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:54.179 ============================================================================== 00:08:54.179 Range in us Cumulative IO count 00:08:54.179 9115.462 - 9175.040: 0.0310% ( 3) 00:08:54.179 9175.040 - 9234.618: 0.0828% ( 5) 00:08:54.179 9234.618 - 9294.196: 0.1345% ( 5) 00:08:54.179 9294.196 - 9353.775: 0.1863% ( 5) 00:08:54.179 9353.775 - 9413.353: 0.2794% ( 9) 00:08:54.179 9413.353 - 9472.931: 0.3415% ( 6) 00:08:54.179 9472.931 - 9532.509: 0.4450% ( 10) 00:08:54.179 9532.509 - 9592.087: 0.5898% ( 14) 00:08:54.179 9592.087 - 9651.665: 0.8175% ( 22) 00:08:54.179 9651.665 - 9711.244: 1.0244% ( 20) 00:08:54.179 9711.244 - 9770.822: 1.1900% ( 16) 00:08:54.179 9770.822 - 9830.400: 1.3142% ( 12) 00:08:54.179 9830.400 - 9889.978: 1.5108% ( 19) 00:08:54.179 9889.978 - 9949.556: 1.7074% ( 19) 00:08:54.179 9949.556 - 10009.135: 1.9971% ( 28) 00:08:54.179 10009.135 - 10068.713: 2.3282% ( 32) 00:08:54.179 10068.713 - 10128.291: 2.8353% ( 49) 00:08:54.179 10128.291 - 10187.869: 3.3733% ( 52) 00:08:54.179 10187.869 - 10247.447: 3.9942% ( 60) 00:08:54.179 10247.447 - 10307.025: 4.5530% ( 54) 00:08:54.179 10307.025 - 10366.604: 5.0083% ( 44) 
00:08:54.179 10366.604 - 10426.182: 5.6291% ( 60) 00:08:54.179 10426.182 - 10485.760: 6.2707% ( 62) 00:08:54.179 10485.760 - 10545.338: 6.9847% ( 69) 00:08:54.179 10545.338 - 10604.916: 7.7918% ( 78) 00:08:54.179 10604.916 - 10664.495: 8.7231% ( 90) 00:08:54.179 10664.495 - 10724.073: 9.5820% ( 83) 00:08:54.179 10724.073 - 10783.651: 10.3373% ( 73) 00:08:54.179 10783.651 - 10843.229: 11.1341% ( 77) 00:08:54.179 10843.229 - 10902.807: 11.8481% ( 69) 00:08:54.179 10902.807 - 10962.385: 12.5828% ( 71) 00:08:54.179 10962.385 - 11021.964: 13.3899% ( 78) 00:08:54.179 11021.964 - 11081.542: 14.1763% ( 76) 00:08:54.179 11081.542 - 11141.120: 15.1283% ( 92) 00:08:54.179 11141.120 - 11200.698: 16.2355% ( 107) 00:08:54.179 11200.698 - 11260.276: 17.3324% ( 106) 00:08:54.179 11260.276 - 11319.855: 18.3982% ( 103) 00:08:54.179 11319.855 - 11379.433: 19.4743% ( 104) 00:08:54.179 11379.433 - 11439.011: 20.6954% ( 118) 00:08:54.179 11439.011 - 11498.589: 21.7612% ( 103) 00:08:54.179 11498.589 - 11558.167: 22.8063% ( 101) 00:08:54.179 11558.167 - 11617.745: 24.0170% ( 117) 00:08:54.179 11617.745 - 11677.324: 25.3725% ( 131) 00:08:54.179 11677.324 - 11736.902: 26.8833% ( 146) 00:08:54.179 11736.902 - 11796.480: 28.3733% ( 144) 00:08:54.179 11796.480 - 11856.058: 29.7806% ( 136) 00:08:54.179 11856.058 - 11915.636: 31.2190% ( 139) 00:08:54.179 11915.636 - 11975.215: 32.5331% ( 127) 00:08:54.179 11975.215 - 12034.793: 33.9094% ( 133) 00:08:54.179 12034.793 - 12094.371: 35.2442% ( 129) 00:08:54.179 12094.371 - 12153.949: 36.5687% ( 128) 00:08:54.179 12153.949 - 12213.527: 38.0381% ( 142) 00:08:54.179 12213.527 - 12273.105: 39.1453% ( 107) 00:08:54.180 12273.105 - 12332.684: 40.3353% ( 115) 00:08:54.180 12332.684 - 12392.262: 41.5046% ( 113) 00:08:54.180 12392.262 - 12451.840: 42.7152% ( 117) 00:08:54.180 12451.840 - 12511.418: 43.8121% ( 106) 00:08:54.180 12511.418 - 12570.996: 44.7537% ( 91) 00:08:54.180 12570.996 - 12630.575: 45.6747% ( 89) 00:08:54.180 12630.575 - 12690.153: 46.6370% ( 93) 00:08:54.180 12690.153 - 12749.731: 47.5062% ( 84) 00:08:54.180 12749.731 - 12809.309: 48.4478% ( 91) 00:08:54.180 12809.309 - 12868.887: 49.3481% ( 87) 00:08:54.180 12868.887 - 12928.465: 50.3622% ( 98) 00:08:54.180 12928.465 - 12988.044: 51.3762% ( 98) 00:08:54.180 12988.044 - 13047.622: 52.6904% ( 127) 00:08:54.180 13047.622 - 13107.200: 53.8493% ( 112) 00:08:54.180 13107.200 - 13166.778: 54.8427% ( 96) 00:08:54.180 13166.778 - 13226.356: 55.7637% ( 89) 00:08:54.180 13226.356 - 13285.935: 56.7570% ( 96) 00:08:54.180 13285.935 - 13345.513: 57.7711% ( 98) 00:08:54.180 13345.513 - 13405.091: 58.8473% ( 104) 00:08:54.180 13405.091 - 13464.669: 60.0476% ( 116) 00:08:54.180 13464.669 - 13524.247: 61.2583% ( 117) 00:08:54.180 13524.247 - 13583.825: 62.6966% ( 139) 00:08:54.180 13583.825 - 13643.404: 64.0108% ( 127) 00:08:54.180 13643.404 - 13702.982: 65.3249% ( 127) 00:08:54.180 13702.982 - 13762.560: 66.5873% ( 122) 00:08:54.180 13762.560 - 13822.138: 68.0877% ( 145) 00:08:54.180 13822.138 - 13881.716: 69.4847% ( 135) 00:08:54.180 13881.716 - 13941.295: 70.8402% ( 131) 00:08:54.180 13941.295 - 14000.873: 71.9888% ( 111) 00:08:54.180 14000.873 - 14060.451: 73.2512% ( 122) 00:08:54.180 14060.451 - 14120.029: 74.4102% ( 112) 00:08:54.180 14120.029 - 14179.607: 75.6623% ( 121) 00:08:54.180 14179.607 - 14239.185: 76.8419% ( 114) 00:08:54.180 14239.185 - 14298.764: 77.9594% ( 108) 00:08:54.180 14298.764 - 14358.342: 79.1598% ( 116) 00:08:54.180 14358.342 - 14417.920: 80.2980% ( 110) 00:08:54.180 14417.920 - 14477.498: 81.3328% ( 
100) 00:08:54.180 14477.498 - 14537.076: 82.2537% ( 89) 00:08:54.180 14537.076 - 14596.655: 83.0298% ( 75) 00:08:54.180 14596.655 - 14656.233: 83.7024% ( 65) 00:08:54.180 14656.233 - 14715.811: 84.3233% ( 60) 00:08:54.180 14715.811 - 14775.389: 84.8820% ( 54) 00:08:54.180 14775.389 - 14834.967: 85.3373% ( 44) 00:08:54.180 14834.967 - 14894.545: 85.7202% ( 37) 00:08:54.180 14894.545 - 14954.124: 86.1548% ( 42) 00:08:54.180 14954.124 - 15013.702: 86.5894% ( 42) 00:08:54.180 15013.702 - 15073.280: 87.0033% ( 40) 00:08:54.180 15073.280 - 15132.858: 87.4690% ( 45) 00:08:54.180 15132.858 - 15192.436: 87.9656% ( 48) 00:08:54.180 15192.436 - 15252.015: 88.4830% ( 50) 00:08:54.180 15252.015 - 15371.171: 89.4247% ( 91) 00:08:54.180 15371.171 - 15490.327: 90.1180% ( 67) 00:08:54.180 15490.327 - 15609.484: 90.7285% ( 59) 00:08:54.180 15609.484 - 15728.640: 91.4114% ( 66) 00:08:54.180 15728.640 - 15847.796: 92.1254% ( 69) 00:08:54.180 15847.796 - 15966.953: 92.8498% ( 70) 00:08:54.180 15966.953 - 16086.109: 93.5844% ( 71) 00:08:54.180 16086.109 - 16205.265: 94.2984% ( 69) 00:08:54.180 16205.265 - 16324.422: 94.9917% ( 67) 00:08:54.180 16324.422 - 16443.578: 95.5815% ( 57) 00:08:54.180 16443.578 - 16562.735: 96.1610% ( 56) 00:08:54.180 16562.735 - 16681.891: 96.6267% ( 45) 00:08:54.180 16681.891 - 16801.047: 97.0716% ( 43) 00:08:54.180 16801.047 - 16920.204: 97.4131% ( 33) 00:08:54.180 16920.204 - 17039.360: 97.7546% ( 33) 00:08:54.180 17039.360 - 17158.516: 98.0857% ( 32) 00:08:54.180 17158.516 - 17277.673: 98.3237% ( 23) 00:08:54.180 17277.673 - 17396.829: 98.4789% ( 15) 00:08:54.180 17396.829 - 17515.985: 98.5410% ( 6) 00:08:54.180 17515.985 - 17635.142: 98.5927% ( 5) 00:08:54.180 17635.142 - 17754.298: 98.6548% ( 6) 00:08:54.180 17754.298 - 17873.455: 98.6755% ( 2) 00:08:54.180 22997.178 - 23116.335: 98.7997% ( 12) 00:08:54.180 23116.335 - 23235.491: 98.8721% ( 7) 00:08:54.180 23235.491 - 23354.647: 98.8928% ( 2) 00:08:54.180 23354.647 - 23473.804: 98.9238% ( 3) 00:08:54.180 23473.804 - 23592.960: 98.9549% ( 3) 00:08:54.180 23592.960 - 23712.116: 98.9756% ( 2) 00:08:54.180 23712.116 - 23831.273: 99.0066% ( 3) 00:08:54.180 23831.273 - 23950.429: 99.0273% ( 2) 00:08:54.180 23950.429 - 24069.585: 99.0584% ( 3) 00:08:54.180 24069.585 - 24188.742: 99.0791% ( 2) 00:08:54.180 24188.742 - 24307.898: 99.0998% ( 2) 00:08:54.180 24307.898 - 24427.055: 99.1308% ( 3) 00:08:54.180 24427.055 - 24546.211: 99.1618% ( 3) 00:08:54.180 24546.211 - 24665.367: 99.1929% ( 3) 00:08:54.180 24665.367 - 24784.524: 99.2239% ( 3) 00:08:54.180 24784.524 - 24903.680: 99.2446% ( 2) 00:08:54.180 24903.680 - 25022.836: 99.2757% ( 3) 00:08:54.180 25022.836 - 25141.993: 99.3067% ( 3) 00:08:54.180 25141.993 - 25261.149: 99.3377% ( 3) 00:08:54.180 30027.404 - 30146.560: 99.3481% ( 1) 00:08:54.180 30146.560 - 30265.716: 99.3791% ( 3) 00:08:54.180 30265.716 - 30384.873: 99.3998% ( 2) 00:08:54.180 30384.873 - 30504.029: 99.4309% ( 3) 00:08:54.180 30504.029 - 30742.342: 99.4930% ( 6) 00:08:54.180 30742.342 - 30980.655: 99.5447% ( 5) 00:08:54.180 30980.655 - 31218.967: 99.6068% ( 6) 00:08:54.180 31218.967 - 31457.280: 99.6689% ( 6) 00:08:54.180 31457.280 - 31695.593: 99.7310% ( 6) 00:08:54.180 31695.593 - 31933.905: 99.7930% ( 6) 00:08:54.180 31933.905 - 32172.218: 99.8551% ( 6) 00:08:54.180 32172.218 - 32410.531: 99.9069% ( 5) 00:08:54.180 32410.531 - 32648.844: 99.9690% ( 6) 00:08:54.180 32648.844 - 32887.156: 100.0000% ( 3) 00:08:54.180 00:08:54.180 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:54.180 
============================================================================== 00:08:54.180 Range in us Cumulative IO count 00:08:54.180 9234.618 - 9294.196: 0.0724% ( 7) 00:08:54.180 9294.196 - 9353.775: 0.1345% ( 6) 00:08:54.180 9353.775 - 9413.353: 0.3001% ( 16) 00:08:54.180 9413.353 - 9472.931: 0.4656% ( 16) 00:08:54.180 9472.931 - 9532.509: 0.7347% ( 26) 00:08:54.180 9532.509 - 9592.087: 0.9313% ( 19) 00:08:54.180 9592.087 - 9651.665: 1.1175% ( 18) 00:08:54.180 9651.665 - 9711.244: 1.2831% ( 16) 00:08:54.180 9711.244 - 9770.822: 1.4383% ( 15) 00:08:54.180 9770.822 - 9830.400: 1.6349% ( 19) 00:08:54.180 9830.400 - 9889.978: 1.8315% ( 19) 00:08:54.180 9889.978 - 9949.556: 1.9868% ( 15) 00:08:54.180 9949.556 - 10009.135: 2.1627% ( 17) 00:08:54.180 10009.135 - 10068.713: 2.4834% ( 31) 00:08:54.180 10068.713 - 10128.291: 2.9698% ( 47) 00:08:54.180 10128.291 - 10187.869: 3.6010% ( 61) 00:08:54.180 10187.869 - 10247.447: 4.2219% ( 60) 00:08:54.180 10247.447 - 10307.025: 4.7599% ( 52) 00:08:54.180 10307.025 - 10366.604: 5.1531% ( 38) 00:08:54.180 10366.604 - 10426.182: 5.5877% ( 42) 00:08:54.180 10426.182 - 10485.760: 6.0741% ( 47) 00:08:54.180 10485.760 - 10545.338: 6.4983% ( 41) 00:08:54.180 10545.338 - 10604.916: 7.0985% ( 58) 00:08:54.180 10604.916 - 10664.495: 7.7918% ( 67) 00:08:54.180 10664.495 - 10724.073: 8.5472% ( 73) 00:08:54.180 10724.073 - 10783.651: 9.2819% ( 71) 00:08:54.180 10783.651 - 10843.229: 9.9959% ( 69) 00:08:54.180 10843.229 - 10902.807: 10.7616% ( 74) 00:08:54.180 10902.807 - 10962.385: 11.6101% ( 82) 00:08:54.180 10962.385 - 11021.964: 12.5103% ( 87) 00:08:54.180 11021.964 - 11081.542: 13.3382% ( 80) 00:08:54.180 11081.542 - 11141.120: 14.2798% ( 91) 00:08:54.180 11141.120 - 11200.698: 15.3560% ( 104) 00:08:54.180 11200.698 - 11260.276: 16.3804% ( 99) 00:08:54.180 11260.276 - 11319.855: 17.5393% ( 112) 00:08:54.180 11319.855 - 11379.433: 18.9052% ( 132) 00:08:54.181 11379.433 - 11439.011: 20.2297% ( 128) 00:08:54.181 11439.011 - 11498.589: 21.6474% ( 137) 00:08:54.181 11498.589 - 11558.167: 22.9822% ( 129) 00:08:54.181 11558.167 - 11617.745: 24.2860% ( 126) 00:08:54.181 11617.745 - 11677.324: 25.5795% ( 125) 00:08:54.181 11677.324 - 11736.902: 27.1316% ( 150) 00:08:54.181 11736.902 - 11796.480: 28.7148% ( 153) 00:08:54.181 11796.480 - 11856.058: 29.8738% ( 112) 00:08:54.181 11856.058 - 11915.636: 31.1465% ( 123) 00:08:54.181 11915.636 - 11975.215: 32.4296% ( 124) 00:08:54.181 11975.215 - 12034.793: 33.7127% ( 124) 00:08:54.181 12034.793 - 12094.371: 34.8613% ( 111) 00:08:54.181 12094.371 - 12153.949: 36.1445% ( 124) 00:08:54.181 12153.949 - 12213.527: 37.4483% ( 126) 00:08:54.181 12213.527 - 12273.105: 38.7107% ( 122) 00:08:54.181 12273.105 - 12332.684: 40.0352% ( 128) 00:08:54.181 12332.684 - 12392.262: 41.3493% ( 127) 00:08:54.181 12392.262 - 12451.840: 42.5600% ( 117) 00:08:54.181 12451.840 - 12511.418: 43.7603% ( 116) 00:08:54.181 12511.418 - 12570.996: 44.8779% ( 108) 00:08:54.181 12570.996 - 12630.575: 45.9023% ( 99) 00:08:54.181 12630.575 - 12690.153: 46.8129% ( 88) 00:08:54.181 12690.153 - 12749.731: 47.6200% ( 78) 00:08:54.181 12749.731 - 12809.309: 48.5617% ( 91) 00:08:54.181 12809.309 - 12868.887: 49.6689% ( 107) 00:08:54.181 12868.887 - 12928.465: 50.6519% ( 95) 00:08:54.181 12928.465 - 12988.044: 51.6660% ( 98) 00:08:54.181 12988.044 - 13047.622: 52.6904% ( 99) 00:08:54.181 13047.622 - 13107.200: 53.8286% ( 110) 00:08:54.181 13107.200 - 13166.778: 54.9151% ( 105) 00:08:54.181 13166.778 - 13226.356: 55.9810% ( 103) 00:08:54.181 13226.356 - 13285.935: 
57.0778% ( 106) 00:08:54.181 13285.935 - 13345.513: 58.3920% ( 127) 00:08:54.181 13345.513 - 13405.091: 59.7372% ( 130) 00:08:54.181 13405.091 - 13464.669: 61.1341% ( 135) 00:08:54.181 13464.669 - 13524.247: 62.4793% ( 130) 00:08:54.181 13524.247 - 13583.825: 63.7831% ( 126) 00:08:54.181 13583.825 - 13643.404: 65.1283% ( 130) 00:08:54.181 13643.404 - 13702.982: 66.4425% ( 127) 00:08:54.181 13702.982 - 13762.560: 67.8394% ( 135) 00:08:54.181 13762.560 - 13822.138: 69.1950% ( 131) 00:08:54.181 13822.138 - 13881.716: 70.5195% ( 128) 00:08:54.181 13881.716 - 13941.295: 71.9992% ( 143) 00:08:54.181 13941.295 - 14000.873: 73.3237% ( 128) 00:08:54.181 14000.873 - 14060.451: 74.7103% ( 134) 00:08:54.181 14060.451 - 14120.029: 75.9934% ( 124) 00:08:54.181 14120.029 - 14179.607: 77.2351% ( 120) 00:08:54.181 14179.607 - 14239.185: 78.3630% ( 109) 00:08:54.181 14239.185 - 14298.764: 79.3357% ( 94) 00:08:54.181 14298.764 - 14358.342: 80.3704% ( 100) 00:08:54.181 14358.342 - 14417.920: 81.2914% ( 89) 00:08:54.181 14417.920 - 14477.498: 82.2227% ( 90) 00:08:54.181 14477.498 - 14537.076: 83.1022% ( 85) 00:08:54.181 14537.076 - 14596.655: 83.8990% ( 77) 00:08:54.181 14596.655 - 14656.233: 84.6337% ( 71) 00:08:54.181 14656.233 - 14715.811: 85.2339% ( 58) 00:08:54.181 14715.811 - 14775.389: 85.8133% ( 56) 00:08:54.181 14775.389 - 14834.967: 86.3204% ( 49) 00:08:54.181 14834.967 - 14894.545: 86.7757% ( 44) 00:08:54.181 14894.545 - 14954.124: 87.1896% ( 40) 00:08:54.181 14954.124 - 15013.702: 87.7070% ( 50) 00:08:54.181 15013.702 - 15073.280: 88.1726% ( 45) 00:08:54.181 15073.280 - 15132.858: 88.6900% ( 50) 00:08:54.181 15132.858 - 15192.436: 89.1556% ( 45) 00:08:54.181 15192.436 - 15252.015: 89.6109% ( 44) 00:08:54.181 15252.015 - 15371.171: 90.4594% ( 82) 00:08:54.181 15371.171 - 15490.327: 91.0906% ( 61) 00:08:54.181 15490.327 - 15609.484: 91.6287% ( 52) 00:08:54.181 15609.484 - 15728.640: 92.2496% ( 60) 00:08:54.181 15728.640 - 15847.796: 92.9222% ( 65) 00:08:54.181 15847.796 - 15966.953: 93.3775% ( 44) 00:08:54.181 15966.953 - 16086.109: 93.8328% ( 44) 00:08:54.181 16086.109 - 16205.265: 94.1846% ( 34) 00:08:54.181 16205.265 - 16324.422: 94.6399% ( 44) 00:08:54.181 16324.422 - 16443.578: 95.0952% ( 44) 00:08:54.181 16443.578 - 16562.735: 95.5401% ( 43) 00:08:54.181 16562.735 - 16681.891: 95.9541% ( 40) 00:08:54.181 16681.891 - 16801.047: 96.2438% ( 28) 00:08:54.181 16801.047 - 16920.204: 96.4611% ( 21) 00:08:54.181 16920.204 - 17039.360: 96.6267% ( 16) 00:08:54.181 17039.360 - 17158.516: 96.8336% ( 20) 00:08:54.181 17158.516 - 17277.673: 97.0716% ( 23) 00:08:54.181 17277.673 - 17396.829: 97.2682% ( 19) 00:08:54.181 17396.829 - 17515.985: 97.3406% ( 7) 00:08:54.181 17515.985 - 17635.142: 97.3613% ( 2) 00:08:54.181 17635.142 - 17754.298: 97.4648% ( 10) 00:08:54.181 17754.298 - 17873.455: 97.5373% ( 7) 00:08:54.181 17873.455 - 17992.611: 97.6200% ( 8) 00:08:54.181 17992.611 - 18111.767: 97.7132% ( 9) 00:08:54.181 18111.767 - 18230.924: 97.7649% ( 5) 00:08:54.181 18230.924 - 18350.080: 97.8477% ( 8) 00:08:54.181 18350.080 - 18469.236: 97.9822% ( 13) 00:08:54.181 18469.236 - 18588.393: 98.1064% ( 12) 00:08:54.181 18588.393 - 18707.549: 98.2099% ( 10) 00:08:54.181 18707.549 - 18826.705: 98.2719% ( 6) 00:08:54.181 18826.705 - 18945.862: 98.3237% ( 5) 00:08:54.181 18945.862 - 19065.018: 98.3858% ( 6) 00:08:54.181 19065.018 - 19184.175: 98.4478% ( 6) 00:08:54.181 19184.175 - 19303.331: 98.4996% ( 5) 00:08:54.181 19303.331 - 19422.487: 98.5410% ( 4) 00:08:54.181 19422.487 - 19541.644: 98.5927% ( 5) 00:08:54.181 
19541.644 - 19660.800: 98.6651% ( 7) 00:08:54.181 19660.800 - 19779.956: 98.6755% ( 1) 00:08:54.181 19899.113 - 20018.269: 98.8204% ( 14) 00:08:54.181 20018.269 - 20137.425: 98.8514% ( 3) 00:08:54.181 20137.425 - 20256.582: 98.8825% ( 3) 00:08:54.181 20256.582 - 20375.738: 98.9031% ( 2) 00:08:54.181 20375.738 - 20494.895: 98.9342% ( 3) 00:08:54.181 20494.895 - 20614.051: 98.9652% ( 3) 00:08:54.181 20614.051 - 20733.207: 98.9859% ( 2) 00:08:54.181 20733.207 - 20852.364: 99.0170% ( 3) 00:08:54.181 20852.364 - 20971.520: 99.0377% ( 2) 00:08:54.181 20971.520 - 21090.676: 99.0687% ( 3) 00:08:54.181 21090.676 - 21209.833: 99.0894% ( 2) 00:08:54.181 21209.833 - 21328.989: 99.1204% ( 3) 00:08:54.181 21328.989 - 21448.145: 99.1515% ( 3) 00:08:54.181 21448.145 - 21567.302: 99.1825% ( 3) 00:08:54.181 21567.302 - 21686.458: 99.2136% ( 3) 00:08:54.181 21686.458 - 21805.615: 99.2446% ( 3) 00:08:54.181 21805.615 - 21924.771: 99.2757% ( 3) 00:08:54.181 21924.771 - 22043.927: 99.3067% ( 3) 00:08:54.181 22043.927 - 22163.084: 99.3377% ( 3) 00:08:54.181 27048.495 - 27167.651: 99.3481% ( 1) 00:08:54.181 27167.651 - 27286.807: 99.3688% ( 2) 00:08:54.181 27286.807 - 27405.964: 99.3998% ( 3) 00:08:54.181 27405.964 - 27525.120: 99.4309% ( 3) 00:08:54.181 27525.120 - 27644.276: 99.4619% ( 3) 00:08:54.181 27644.276 - 27763.433: 99.4930% ( 3) 00:08:54.181 27763.433 - 27882.589: 99.5240% ( 3) 00:08:54.181 27882.589 - 28001.745: 99.5550% ( 3) 00:08:54.181 28001.745 - 28120.902: 99.5861% ( 3) 00:08:54.181 28120.902 - 28240.058: 99.6171% ( 3) 00:08:54.181 28240.058 - 28359.215: 99.6482% ( 3) 00:08:54.181 28359.215 - 28478.371: 99.6792% ( 3) 00:08:54.181 28478.371 - 28597.527: 99.7103% ( 3) 00:08:54.181 28597.527 - 28716.684: 99.7413% ( 3) 00:08:54.181 28716.684 - 28835.840: 99.7724% ( 3) 00:08:54.181 28835.840 - 28954.996: 99.8034% ( 3) 00:08:54.181 28954.996 - 29074.153: 99.8344% ( 3) 00:08:54.181 29074.153 - 29193.309: 99.8655% ( 3) 00:08:54.181 29193.309 - 29312.465: 99.8965% ( 3) 00:08:54.181 29312.465 - 29431.622: 99.9276% ( 3) 00:08:54.181 29431.622 - 29550.778: 99.9586% ( 3) 00:08:54.181 29550.778 - 29669.935: 99.9897% ( 3) 00:08:54.181 29669.935 - 29789.091: 100.0000% ( 1) 00:08:54.181 00:08:54.181 13:04:46 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:54.181 00:08:54.181 real 0m2.690s 00:08:54.181 user 0m2.279s 00:08:54.181 sys 0m0.299s 00:08:54.181 13:04:46 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.181 13:04:46 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 ************************************ 00:08:54.181 END TEST nvme_perf 00:08:54.181 ************************************ 00:08:54.181 13:04:46 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:54.181 13:04:46 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:54.181 13:04:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.181 13:04:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 ************************************ 00:08:54.181 START TEST nvme_hello_world 00:08:54.181 ************************************ 00:08:54.181 13:04:46 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:54.440 Initializing NVMe Controllers 00:08:54.440 Attached to 0000:00:10.0 00:08:54.440 Namespace ID: 1 size: 6GB 00:08:54.440 Attached to 0000:00:11.0 00:08:54.440 Namespace ID: 1 size: 5GB 00:08:54.440 
Attached to 0000:00:13.0 00:08:54.440 Namespace ID: 1 size: 1GB 00:08:54.440 Attached to 0000:00:12.0 00:08:54.440 Namespace ID: 1 size: 4GB 00:08:54.440 Namespace ID: 2 size: 4GB 00:08:54.440 Namespace ID: 3 size: 4GB 00:08:54.440 Initialization complete. 00:08:54.440 INFO: using host memory buffer for IO 00:08:54.440 Hello world! 00:08:54.440 INFO: using host memory buffer for IO 00:08:54.440 Hello world! 00:08:54.440 INFO: using host memory buffer for IO 00:08:54.440 Hello world! 00:08:54.440 INFO: using host memory buffer for IO 00:08:54.440 Hello world! 00:08:54.440 INFO: using host memory buffer for IO 00:08:54.440 Hello world! 00:08:54.440 INFO: using host memory buffer for IO 00:08:54.440 Hello world! 00:08:54.440 00:08:54.440 real 0m0.277s 00:08:54.440 user 0m0.113s 00:08:54.440 sys 0m0.119s 00:08:54.440 13:04:46 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.440 ************************************ 00:08:54.440 END TEST nvme_hello_world 00:08:54.440 ************************************ 00:08:54.440 13:04:46 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:54.440 13:04:46 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:54.440 13:04:46 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.440 13:04:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.440 13:04:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.440 ************************************ 00:08:54.440 START TEST nvme_sgl 00:08:54.440 ************************************ 00:08:54.440 13:04:46 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:54.698 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:54.698 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:54.698 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:54.698 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:54.698 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:54.698 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:54.698 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:54.698 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:54.698 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:54.698 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:54.698 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:54.698 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:54.698 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:54.698 0000:00:12.0: 
build_io_request_0 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:54.698 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:54.698 NVMe Readv/Writev Request test 00:08:54.698 Attached to 0000:00:10.0 00:08:54.698 Attached to 0000:00:11.0 00:08:54.698 Attached to 0000:00:13.0 00:08:54.698 Attached to 0000:00:12.0 00:08:54.698 0000:00:10.0: build_io_request_2 test passed 00:08:54.698 0000:00:10.0: build_io_request_4 test passed 00:08:54.698 0000:00:10.0: build_io_request_5 test passed 00:08:54.698 0000:00:10.0: build_io_request_6 test passed 00:08:54.698 0000:00:10.0: build_io_request_7 test passed 00:08:54.698 0000:00:10.0: build_io_request_10 test passed 00:08:54.698 0000:00:11.0: build_io_request_2 test passed 00:08:54.698 0000:00:11.0: build_io_request_4 test passed 00:08:54.698 0000:00:11.0: build_io_request_5 test passed 00:08:54.698 0000:00:11.0: build_io_request_6 test passed 00:08:54.698 0000:00:11.0: build_io_request_7 test passed 00:08:54.698 0000:00:11.0: build_io_request_10 test passed 00:08:54.698 Cleaning up... 00:08:54.698 00:08:54.698 real 0m0.360s 00:08:54.698 user 0m0.201s 00:08:54.698 sys 0m0.114s 00:08:54.698 13:04:46 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.698 ************************************ 00:08:54.698 END TEST nvme_sgl 00:08:54.698 ************************************ 00:08:54.698 13:04:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:54.698 13:04:46 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:54.698 13:04:46 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.698 13:04:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.698 13:04:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.698 ************************************ 00:08:54.698 START TEST nvme_e2edp 00:08:54.698 ************************************ 00:08:54.698 13:04:46 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:55.265 NVMe Write/Read with End-to-End data protection test 00:08:55.265 Attached to 0000:00:10.0 00:08:55.265 Attached to 0000:00:11.0 00:08:55.265 Attached to 0000:00:13.0 00:08:55.265 Attached to 0000:00:12.0 00:08:55.265 Cleaning up... 
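For context on what the nvme_e2edp run above exercises: NVMe end-to-end data protection carries T10 protection information (PI) with each logical block, and the PI guard field is a CRC-16 computed over the block data, so corruption anywhere between host and media is caught when the guard is rechecked. The sketch below only illustrates that guard computation; it is not taken from SPDK or the test binary, and it assumes the commonly used T10-DIF polynomial 0x8BB7 over a 512-byte block.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative T10-DIF style guard: CRC-16, polynomial 0x8BB7, initial
 * value 0, no bit reflection. Conceptual sketch only -- not the routine
 * SPDK or the controller actually uses.
 */
static uint16_t t10dif_guard(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    uint8_t block[512];

    memset(block, 0xA5, sizeof(block));                 /* fake block payload */
    uint16_t guard = t10dif_guard(block, sizeof(block));

    /* A writer stores this value in the PI guard field; a reader recomputes
     * it and compares -- that comparison is the "end-to-end" protection. */
    printf("guard = 0x%04x\n", guard);
    return 0;
}
```

The PI also carries an application tag and a reference tag that are set and checked separately; only the guard is shown here.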
00:08:55.265 00:08:55.265 real 0m0.284s 00:08:55.265 user 0m0.104s 00:08:55.265 sys 0m0.137s 00:08:55.265 13:04:47 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.265 ************************************ 00:08:55.265 END TEST nvme_e2edp 00:08:55.265 ************************************ 00:08:55.265 13:04:47 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:55.265 13:04:47 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:55.265 13:04:47 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.265 13:04:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.265 13:04:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.265 ************************************ 00:08:55.265 START TEST nvme_reserve 00:08:55.265 ************************************ 00:08:55.265 13:04:47 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:55.524 ===================================================== 00:08:55.524 NVMe Controller at PCI bus 0, device 16, function 0 00:08:55.524 ===================================================== 00:08:55.524 Reservations: Not Supported 00:08:55.524 ===================================================== 00:08:55.524 NVMe Controller at PCI bus 0, device 17, function 0 00:08:55.524 ===================================================== 00:08:55.524 Reservations: Not Supported 00:08:55.524 ===================================================== 00:08:55.524 NVMe Controller at PCI bus 0, device 19, function 0 00:08:55.524 ===================================================== 00:08:55.524 Reservations: Not Supported 00:08:55.524 ===================================================== 00:08:55.524 NVMe Controller at PCI bus 0, device 18, function 0 00:08:55.524 ===================================================== 00:08:55.524 Reservations: Not Supported 00:08:55.524 Reservation test passed 00:08:55.524 00:08:55.524 real 0m0.335s 00:08:55.524 user 0m0.116s 00:08:55.524 sys 0m0.171s 00:08:55.524 13:04:47 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.524 13:04:47 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:55.524 ************************************ 00:08:55.524 END TEST nvme_reserve 00:08:55.524 ************************************ 00:08:55.524 13:04:47 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:55.524 13:04:47 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.524 13:04:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.524 13:04:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.524 ************************************ 00:08:55.524 START TEST nvme_err_injection 00:08:55.524 ************************************ 00:08:55.524 13:04:47 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:55.782 NVMe Error Injection test 00:08:55.782 Attached to 0000:00:10.0 00:08:55.782 Attached to 0000:00:11.0 00:08:55.782 Attached to 0000:00:13.0 00:08:55.782 Attached to 0000:00:12.0 00:08:55.782 0000:00:10.0: get features failed as expected 00:08:55.782 0000:00:11.0: get features failed as expected 00:08:55.782 0000:00:13.0: get features failed as expected 00:08:55.782 0000:00:12.0: get features failed as expected 00:08:55.782 
0000:00:10.0: get features successfully as expected 00:08:55.782 0000:00:11.0: get features successfully as expected 00:08:55.782 0000:00:13.0: get features successfully as expected 00:08:55.782 0000:00:12.0: get features successfully as expected 00:08:55.782 0000:00:10.0: read failed as expected 00:08:55.782 0000:00:11.0: read failed as expected 00:08:55.782 0000:00:13.0: read failed as expected 00:08:55.782 0000:00:12.0: read failed as expected 00:08:55.782 0000:00:10.0: read successfully as expected 00:08:55.782 0000:00:11.0: read successfully as expected 00:08:55.782 0000:00:13.0: read successfully as expected 00:08:55.782 0000:00:12.0: read successfully as expected 00:08:55.782 Cleaning up... 00:08:55.782 00:08:55.782 real 0m0.308s 00:08:55.782 user 0m0.111s 00:08:55.782 sys 0m0.154s 00:08:55.782 13:04:47 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.782 ************************************ 00:08:55.782 END TEST nvme_err_injection 00:08:55.782 ************************************ 00:08:55.782 13:04:47 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:55.782 13:04:47 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:55.782 13:04:47 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:55.782 13:04:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.782 13:04:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.782 ************************************ 00:08:55.782 START TEST nvme_overhead 00:08:55.782 ************************************ 00:08:55.782 13:04:47 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:57.158 Initializing NVMe Controllers 00:08:57.158 Attached to 0000:00:10.0 00:08:57.158 Attached to 0000:00:11.0 00:08:57.158 Attached to 0000:00:13.0 00:08:57.158 Attached to 0000:00:12.0 00:08:57.158 Initialization complete. Launching workers. 
00:08:57.158 submit (in ns) avg, min, max = 15628.8, 13012.3, 88806.8 00:08:57.158 complete (in ns) avg, min, max = 10333.7, 9049.5, 101353.2 00:08:57.158 00:08:57.158 Submit histogram 00:08:57.158 ================ 00:08:57.158 Range in us Cumulative Count 00:08:57.158 12.975 - 13.033: 0.0095% ( 1) 00:08:57.158 13.905 - 13.964: 0.0380% ( 3) 00:08:57.158 13.964 - 14.022: 0.0760% ( 4) 00:08:57.158 14.022 - 14.080: 0.1046% ( 3) 00:08:57.158 14.080 - 14.138: 0.1806% ( 8) 00:08:57.158 14.138 - 14.196: 0.2852% ( 11) 00:08:57.158 14.196 - 14.255: 0.3232% ( 4) 00:08:57.158 14.255 - 14.313: 0.3517% ( 3) 00:08:57.158 14.313 - 14.371: 0.4183% ( 7) 00:08:57.158 14.371 - 14.429: 0.6939% ( 29) 00:08:57.158 14.429 - 14.487: 0.9601% ( 28) 00:08:57.158 14.487 - 14.545: 1.3118% ( 37) 00:08:57.158 14.545 - 14.604: 1.7110% ( 42) 00:08:57.158 14.604 - 14.662: 2.3574% ( 68) 00:08:57.158 14.662 - 14.720: 3.6977% ( 141) 00:08:57.158 14.720 - 14.778: 7.0247% ( 350) 00:08:57.158 14.778 - 14.836: 13.4125% ( 672) 00:08:57.158 14.836 - 14.895: 24.4962% ( 1166) 00:08:57.158 14.895 - 15.011: 50.4373% ( 2729) 00:08:57.158 15.011 - 15.127: 65.9601% ( 1633) 00:08:57.158 15.127 - 15.244: 73.7928% ( 824) 00:08:57.158 15.244 - 15.360: 78.5837% ( 504) 00:08:57.158 15.360 - 15.476: 81.5114% ( 308) 00:08:57.158 15.476 - 15.593: 83.7452% ( 235) 00:08:57.158 15.593 - 15.709: 85.7129% ( 207) 00:08:57.158 15.709 - 15.825: 87.1198% ( 148) 00:08:57.158 15.825 - 15.942: 88.3270% ( 127) 00:08:57.158 15.942 - 16.058: 89.4297% ( 116) 00:08:57.158 16.058 - 16.175: 90.5418% ( 117) 00:08:57.158 16.175 - 16.291: 91.5019% ( 101) 00:08:57.158 16.291 - 16.407: 92.2243% ( 76) 00:08:57.158 16.407 - 16.524: 92.8042% ( 61) 00:08:57.158 16.524 - 16.640: 93.1179% ( 33) 00:08:57.158 16.640 - 16.756: 93.3935% ( 29) 00:08:57.158 16.756 - 16.873: 93.6692% ( 29) 00:08:57.158 16.873 - 16.989: 93.8118% ( 15) 00:08:57.158 16.989 - 17.105: 93.9639% ( 16) 00:08:57.158 17.105 - 17.222: 94.1065% ( 15) 00:08:57.158 17.222 - 17.338: 94.2395% ( 14) 00:08:57.158 17.338 - 17.455: 94.2776% ( 4) 00:08:57.158 17.455 - 17.571: 94.2966% ( 2) 00:08:57.158 17.571 - 17.687: 94.3346% ( 4) 00:08:57.158 17.687 - 17.804: 94.3631% ( 3) 00:08:57.158 17.804 - 17.920: 94.4011% ( 4) 00:08:57.158 17.920 - 18.036: 94.4297% ( 3) 00:08:57.158 18.036 - 18.153: 94.4487% ( 2) 00:08:57.158 18.153 - 18.269: 94.4772% ( 3) 00:08:57.158 18.269 - 18.385: 94.5057% ( 3) 00:08:57.158 18.385 - 18.502: 94.5152% ( 1) 00:08:57.158 18.502 - 18.618: 94.5532% ( 4) 00:08:57.158 18.618 - 18.735: 94.5722% ( 2) 00:08:57.158 18.735 - 18.851: 94.5913% ( 2) 00:08:57.158 18.851 - 18.967: 94.6198% ( 3) 00:08:57.158 18.967 - 19.084: 94.6483% ( 3) 00:08:57.158 19.084 - 19.200: 94.6578% ( 1) 00:08:57.158 19.316 - 19.433: 94.6768% ( 2) 00:08:57.158 19.549 - 19.665: 94.7053% ( 3) 00:08:57.158 19.665 - 19.782: 94.7529% ( 5) 00:08:57.158 19.782 - 19.898: 94.7909% ( 4) 00:08:57.158 19.898 - 20.015: 94.8004% ( 1) 00:08:57.158 20.015 - 20.131: 94.8574% ( 6) 00:08:57.158 20.131 - 20.247: 94.8954% ( 4) 00:08:57.158 20.247 - 20.364: 94.9905% ( 10) 00:08:57.158 20.364 - 20.480: 95.1046% ( 12) 00:08:57.158 20.480 - 20.596: 95.2091% ( 11) 00:08:57.158 20.596 - 20.713: 95.3327% ( 13) 00:08:57.158 20.713 - 20.829: 95.5894% ( 27) 00:08:57.158 20.829 - 20.945: 95.7795% ( 20) 00:08:57.158 20.945 - 21.062: 95.9221% ( 15) 00:08:57.158 21.062 - 21.178: 96.0551% ( 14) 00:08:57.158 21.178 - 21.295: 96.2072% ( 16) 00:08:57.158 21.295 - 21.411: 96.3023% ( 10) 00:08:57.158 21.411 - 21.527: 96.4259% ( 13) 00:08:57.158 21.527 - 21.644: 96.5019% ( 
8) 00:08:57.158 21.644 - 21.760: 96.6255% ( 13) 00:08:57.158 21.760 - 21.876: 96.6920% ( 7) 00:08:57.158 21.876 - 21.993: 96.7776% ( 9) 00:08:57.158 21.993 - 22.109: 96.8536% ( 8) 00:08:57.158 22.109 - 22.225: 96.9297% ( 8) 00:08:57.158 22.225 - 22.342: 96.9677% ( 4) 00:08:57.158 22.342 - 22.458: 97.0627% ( 10) 00:08:57.158 22.458 - 22.575: 97.1578% ( 10) 00:08:57.158 22.575 - 22.691: 97.2624% ( 11) 00:08:57.158 22.691 - 22.807: 97.2909% ( 3) 00:08:57.158 22.807 - 22.924: 97.3384% ( 5) 00:08:57.158 22.924 - 23.040: 97.3764% ( 4) 00:08:57.158 23.040 - 23.156: 97.4240% ( 5) 00:08:57.158 23.156 - 23.273: 97.4525% ( 3) 00:08:57.158 23.273 - 23.389: 97.5190% ( 7) 00:08:57.158 23.389 - 23.505: 97.5475% ( 3) 00:08:57.158 23.505 - 23.622: 97.5760% ( 3) 00:08:57.158 23.622 - 23.738: 97.6806% ( 11) 00:08:57.158 23.738 - 23.855: 97.7852% ( 11) 00:08:57.158 23.855 - 23.971: 97.8992% ( 12) 00:08:57.158 23.971 - 24.087: 97.9943% ( 10) 00:08:57.158 24.087 - 24.204: 98.0513% ( 6) 00:08:57.158 24.204 - 24.320: 98.1369% ( 9) 00:08:57.158 24.320 - 24.436: 98.1844% ( 5) 00:08:57.158 24.436 - 24.553: 98.2224% ( 4) 00:08:57.158 24.553 - 24.669: 98.2890% ( 7) 00:08:57.158 24.669 - 24.785: 98.3555% ( 7) 00:08:57.158 24.785 - 24.902: 98.4316% ( 8) 00:08:57.158 24.902 - 25.018: 98.4791% ( 5) 00:08:57.158 25.018 - 25.135: 98.4981% ( 2) 00:08:57.158 25.135 - 25.251: 98.5361% ( 4) 00:08:57.158 25.251 - 25.367: 98.5741% ( 4) 00:08:57.158 25.367 - 25.484: 98.6312% ( 6) 00:08:57.158 25.484 - 25.600: 98.6407% ( 1) 00:08:57.158 25.600 - 25.716: 98.6502% ( 1) 00:08:57.158 25.716 - 25.833: 98.7072% ( 6) 00:08:57.158 25.833 - 25.949: 98.7357% ( 3) 00:08:57.158 25.949 - 26.065: 98.7738% ( 4) 00:08:57.158 26.065 - 26.182: 98.8023% ( 3) 00:08:57.158 26.182 - 26.298: 98.8308% ( 3) 00:08:57.158 26.298 - 26.415: 98.8593% ( 3) 00:08:57.158 26.415 - 26.531: 98.9259% ( 7) 00:08:57.159 26.531 - 26.647: 98.9734% ( 5) 00:08:57.159 26.647 - 26.764: 98.9924% ( 2) 00:08:57.159 26.764 - 26.880: 99.0114% ( 2) 00:08:57.159 26.880 - 26.996: 99.0209% ( 1) 00:08:57.159 26.996 - 27.113: 99.0304% ( 1) 00:08:57.159 27.113 - 27.229: 99.0494% ( 2) 00:08:57.159 27.229 - 27.345: 99.0684% ( 2) 00:08:57.159 27.345 - 27.462: 99.1065% ( 4) 00:08:57.159 27.462 - 27.578: 99.1255% ( 2) 00:08:57.159 27.578 - 27.695: 99.1350% ( 1) 00:08:57.159 27.695 - 27.811: 99.1445% ( 1) 00:08:57.159 27.811 - 27.927: 99.1635% ( 2) 00:08:57.159 27.927 - 28.044: 99.1730% ( 1) 00:08:57.159 28.276 - 28.393: 99.1920% ( 2) 00:08:57.159 28.393 - 28.509: 99.2015% ( 1) 00:08:57.159 28.509 - 28.625: 99.2205% ( 2) 00:08:57.159 28.625 - 28.742: 99.2300% ( 1) 00:08:57.159 28.742 - 28.858: 99.2586% ( 3) 00:08:57.159 28.858 - 28.975: 99.2776% ( 2) 00:08:57.159 28.975 - 29.091: 99.3156% ( 4) 00:08:57.159 29.091 - 29.207: 99.3631% ( 5) 00:08:57.159 29.324 - 29.440: 99.3726% ( 1) 00:08:57.159 29.440 - 29.556: 99.3916% ( 2) 00:08:57.159 29.556 - 29.673: 99.4297% ( 4) 00:08:57.159 29.673 - 29.789: 99.4582% ( 3) 00:08:57.159 29.789 - 30.022: 99.5057% ( 5) 00:08:57.159 30.022 - 30.255: 99.5627% ( 6) 00:08:57.159 30.255 - 30.487: 99.6103% ( 5) 00:08:57.159 30.487 - 30.720: 99.6198% ( 1) 00:08:57.159 30.720 - 30.953: 99.6483% ( 3) 00:08:57.159 30.953 - 31.185: 99.6673% ( 2) 00:08:57.159 31.185 - 31.418: 99.6863% ( 2) 00:08:57.159 31.418 - 31.651: 99.7053% ( 2) 00:08:57.159 31.651 - 31.884: 99.7148% ( 1) 00:08:57.159 32.116 - 32.349: 99.7243% ( 1) 00:08:57.159 32.349 - 32.582: 99.7433% ( 2) 00:08:57.159 32.582 - 32.815: 99.7814% ( 4) 00:08:57.159 33.047 - 33.280: 99.8004% ( 2) 00:08:57.159 33.745 - 
33.978: 99.8099% ( 1) 00:08:57.159 34.444 - 34.676: 99.8194% ( 1) 00:08:57.159 34.909 - 35.142: 99.8384% ( 2) 00:08:57.159 35.142 - 35.375: 99.8479% ( 1) 00:08:57.159 36.771 - 37.004: 99.8574% ( 1) 00:08:57.159 37.469 - 37.702: 99.8669% ( 1) 00:08:57.159 38.167 - 38.400: 99.8764% ( 1) 00:08:57.159 39.796 - 40.029: 99.8859% ( 1) 00:08:57.159 40.262 - 40.495: 99.8954% ( 1) 00:08:57.159 41.193 - 41.425: 99.9144% ( 2) 00:08:57.159 41.425 - 41.658: 99.9240% ( 1) 00:08:57.159 43.520 - 43.753: 99.9335% ( 1) 00:08:57.159 43.753 - 43.985: 99.9430% ( 1) 00:08:57.159 44.218 - 44.451: 99.9525% ( 1) 00:08:57.159 45.382 - 45.615: 99.9620% ( 1) 00:08:57.159 46.313 - 46.545: 99.9715% ( 1) 00:08:57.159 53.295 - 53.527: 99.9810% ( 1) 00:08:57.159 85.178 - 85.644: 99.9905% ( 1) 00:08:57.159 88.436 - 88.902: 100.0000% ( 1) 00:08:57.159 00:08:57.159 Complete histogram 00:08:57.159 ================== 00:08:57.159 Range in us Cumulative Count 00:08:57.159 9.018 - 9.076: 0.0475% ( 5) 00:08:57.159 9.076 - 9.135: 0.0951% ( 5) 00:08:57.159 9.135 - 9.193: 0.1141% ( 2) 00:08:57.159 9.193 - 9.251: 0.1901% ( 8) 00:08:57.159 9.251 - 9.309: 0.2852% ( 10) 00:08:57.159 9.309 - 9.367: 0.4563% ( 18) 00:08:57.159 9.367 - 9.425: 0.7129% ( 27) 00:08:57.159 9.425 - 9.484: 0.9125% ( 21) 00:08:57.159 9.484 - 9.542: 1.5875% ( 71) 00:08:57.159 9.542 - 9.600: 5.1521% ( 375) 00:08:57.159 9.600 - 9.658: 15.5038% ( 1089) 00:08:57.159 9.658 - 9.716: 32.3574% ( 1773) 00:08:57.159 9.716 - 9.775: 50.2567% ( 1883) 00:08:57.159 9.775 - 9.833: 64.6958% ( 1519) 00:08:57.159 9.833 - 9.891: 73.1464% ( 889) 00:08:57.159 9.891 - 9.949: 78.6027% ( 574) 00:08:57.159 9.949 - 10.007: 81.3308% ( 287) 00:08:57.159 10.007 - 10.065: 82.8707% ( 162) 00:08:57.159 10.065 - 10.124: 83.6692% ( 84) 00:08:57.159 10.124 - 10.182: 84.0684% ( 42) 00:08:57.159 10.182 - 10.240: 84.3441% ( 29) 00:08:57.159 10.240 - 10.298: 84.4677% ( 13) 00:08:57.159 10.298 - 10.356: 84.6483% ( 19) 00:08:57.159 10.356 - 10.415: 84.7909% ( 15) 00:08:57.159 10.415 - 10.473: 85.0570% ( 28) 00:08:57.159 10.473 - 10.531: 85.4087% ( 37) 00:08:57.159 10.531 - 10.589: 85.7700% ( 38) 00:08:57.159 10.589 - 10.647: 86.1122% ( 36) 00:08:57.159 10.647 - 10.705: 86.7966% ( 72) 00:08:57.159 10.705 - 10.764: 87.5000% ( 74) 00:08:57.159 10.764 - 10.822: 88.3365% ( 88) 00:08:57.159 10.822 - 10.880: 89.2110% ( 92) 00:08:57.159 10.880 - 10.938: 89.9049% ( 73) 00:08:57.159 10.938 - 10.996: 90.3517% ( 47) 00:08:57.159 10.996 - 11.055: 90.8270% ( 50) 00:08:57.159 11.055 - 11.113: 91.2167% ( 41) 00:08:57.159 11.113 - 11.171: 91.3783% ( 17) 00:08:57.159 11.171 - 11.229: 91.5494% ( 18) 00:08:57.159 11.229 - 11.287: 91.6540% ( 11) 00:08:57.159 11.287 - 11.345: 91.7681% ( 12) 00:08:57.159 11.345 - 11.404: 91.8631% ( 10) 00:08:57.159 11.404 - 11.462: 92.0437% ( 19) 00:08:57.159 11.462 - 11.520: 92.1008% ( 6) 00:08:57.159 11.520 - 11.578: 92.2338% ( 14) 00:08:57.159 11.578 - 11.636: 92.3289% ( 10) 00:08:57.159 11.636 - 11.695: 92.4525% ( 13) 00:08:57.159 11.695 - 11.753: 92.5095% ( 6) 00:08:57.159 11.753 - 11.811: 92.5665% ( 6) 00:08:57.159 11.811 - 11.869: 92.6236% ( 6) 00:08:57.159 11.869 - 11.927: 92.7186% ( 10) 00:08:57.159 11.927 - 11.985: 92.8042% ( 9) 00:08:57.159 11.985 - 12.044: 92.8612% ( 6) 00:08:57.159 12.044 - 12.102: 92.8707% ( 1) 00:08:57.159 12.102 - 12.160: 92.9563% ( 9) 00:08:57.159 12.160 - 12.218: 93.0323% ( 8) 00:08:57.159 12.218 - 12.276: 93.0989% ( 7) 00:08:57.159 12.276 - 12.335: 93.1369% ( 4) 00:08:57.159 12.335 - 12.393: 93.1939% ( 6) 00:08:57.159 12.393 - 12.451: 93.3365% ( 15) 
00:08:57.159 12.451 - 12.509: 93.4696% ( 14) 00:08:57.159 12.509 - 12.567: 93.6312% ( 17) 00:08:57.159 12.567 - 12.625: 93.7357% ( 11) 00:08:57.159 12.625 - 12.684: 93.9163% ( 19) 00:08:57.159 12.684 - 12.742: 94.3156% ( 42) 00:08:57.159 12.742 - 12.800: 94.5722% ( 27) 00:08:57.159 12.800 - 12.858: 94.8764% ( 32) 00:08:57.159 12.858 - 12.916: 95.1996% ( 34) 00:08:57.159 12.916 - 12.975: 95.3517% ( 16) 00:08:57.159 12.975 - 13.033: 95.5228% ( 18) 00:08:57.159 13.033 - 13.091: 95.6654% ( 15) 00:08:57.159 13.091 - 13.149: 95.7700% ( 11) 00:08:57.159 13.149 - 13.207: 95.8650% ( 10) 00:08:57.159 13.207 - 13.265: 95.9125% ( 5) 00:08:57.159 13.265 - 13.324: 95.9601% ( 5) 00:08:57.159 13.324 - 13.382: 96.0171% ( 6) 00:08:57.159 13.382 - 13.440: 96.0551% ( 4) 00:08:57.159 13.440 - 13.498: 96.1217% ( 7) 00:08:57.159 13.556 - 13.615: 96.1312% ( 1) 00:08:57.159 13.615 - 13.673: 96.1692% ( 4) 00:08:57.159 13.673 - 13.731: 96.1787% ( 1) 00:08:57.159 13.731 - 13.789: 96.2167% ( 4) 00:08:57.159 13.789 - 13.847: 96.2548% ( 4) 00:08:57.159 13.847 - 13.905: 96.2738% ( 2) 00:08:57.159 13.905 - 13.964: 96.2928% ( 2) 00:08:57.159 13.964 - 14.022: 96.3308% ( 4) 00:08:57.159 14.022 - 14.080: 96.3498% ( 2) 00:08:57.159 14.138 - 14.196: 96.3593% ( 1) 00:08:57.159 14.196 - 14.255: 96.3878% ( 3) 00:08:57.159 14.255 - 14.313: 96.3973% ( 1) 00:08:57.159 14.371 - 14.429: 96.4259% ( 3) 00:08:57.159 14.429 - 14.487: 96.4354% ( 1) 00:08:57.159 14.487 - 14.545: 96.4639% ( 3) 00:08:57.159 14.545 - 14.604: 96.4924% ( 3) 00:08:57.159 14.604 - 14.662: 96.5019% ( 1) 00:08:57.159 14.662 - 14.720: 96.5114% ( 1) 00:08:57.159 14.720 - 14.778: 96.5494% ( 4) 00:08:57.159 14.778 - 14.836: 96.5589% ( 1) 00:08:57.159 14.836 - 14.895: 96.5779% ( 2) 00:08:57.159 14.895 - 15.011: 96.5875% ( 1) 00:08:57.159 15.011 - 15.127: 96.6445% ( 6) 00:08:57.159 15.244 - 15.360: 96.6730% ( 3) 00:08:57.159 15.360 - 15.476: 96.7205% ( 5) 00:08:57.159 15.476 - 15.593: 96.7300% ( 1) 00:08:57.159 15.593 - 15.709: 96.7395% ( 1) 00:08:57.159 15.709 - 15.825: 96.7681% ( 3) 00:08:57.159 15.825 - 15.942: 96.7776% ( 1) 00:08:57.159 15.942 - 16.058: 96.8061% ( 3) 00:08:57.159 16.058 - 16.175: 96.8536% ( 5) 00:08:57.159 16.175 - 16.291: 96.9772% ( 13) 00:08:57.159 16.291 - 16.407: 97.0342% ( 6) 00:08:57.159 16.407 - 16.524: 97.0722% ( 4) 00:08:57.159 16.524 - 16.640: 97.1103% ( 4) 00:08:57.159 16.640 - 16.756: 97.1958% ( 9) 00:08:57.159 16.756 - 16.873: 97.2053% ( 1) 00:08:57.159 16.873 - 16.989: 97.2814% ( 8) 00:08:57.159 16.989 - 17.105: 97.3669% ( 9) 00:08:57.159 17.105 - 17.222: 97.3954% ( 3) 00:08:57.159 17.222 - 17.338: 97.4335% ( 4) 00:08:57.159 17.338 - 17.455: 97.4810% ( 5) 00:08:57.159 17.455 - 17.571: 97.5190% ( 4) 00:08:57.160 17.571 - 17.687: 97.5570% ( 4) 00:08:57.160 17.687 - 17.804: 97.5951% ( 4) 00:08:57.160 17.804 - 17.920: 97.6426% ( 5) 00:08:57.160 17.920 - 18.036: 97.6901% ( 5) 00:08:57.160 18.036 - 18.153: 97.7567% ( 7) 00:08:57.160 18.153 - 18.269: 97.8042% ( 5) 00:08:57.160 18.269 - 18.385: 97.8707% ( 7) 00:08:57.160 18.385 - 18.502: 97.9278% ( 6) 00:08:57.160 18.502 - 18.618: 97.9943% ( 7) 00:08:57.160 18.618 - 18.735: 98.0703% ( 8) 00:08:57.160 18.735 - 18.851: 98.1369% ( 7) 00:08:57.160 18.851 - 18.967: 98.1844% ( 5) 00:08:57.160 18.967 - 19.084: 98.2319% ( 5) 00:08:57.160 19.084 - 19.200: 98.2795% ( 5) 00:08:57.160 19.200 - 19.316: 98.3650% ( 9) 00:08:57.160 19.316 - 19.433: 98.4030% ( 4) 00:08:57.160 19.433 - 19.549: 98.4601% ( 6) 00:08:57.160 19.549 - 19.665: 98.5361% ( 8) 00:08:57.160 19.665 - 19.782: 98.5837% ( 5) 00:08:57.160 19.782 
- 19.898: 98.6502% ( 7) 00:08:57.160 19.898 - 20.015: 98.6882% ( 4) 00:08:57.160 20.015 - 20.131: 98.7452% ( 6) 00:08:57.160 20.131 - 20.247: 98.8023% ( 6) 00:08:57.160 20.247 - 20.364: 98.8783% ( 8) 00:08:57.160 20.364 - 20.480: 98.9259% ( 5) 00:08:57.160 20.480 - 20.596: 98.9734% ( 5) 00:08:57.160 20.596 - 20.713: 99.0209% ( 5) 00:08:57.160 20.713 - 20.829: 99.0684% ( 5) 00:08:57.160 20.829 - 20.945: 99.0970% ( 3) 00:08:57.160 20.945 - 21.062: 99.1065% ( 1) 00:08:57.160 21.062 - 21.178: 99.1445% ( 4) 00:08:57.160 21.178 - 21.295: 99.1730% ( 3) 00:08:57.160 21.295 - 21.411: 99.2205% ( 5) 00:08:57.160 21.411 - 21.527: 99.2395% ( 2) 00:08:57.160 21.527 - 21.644: 99.2586% ( 2) 00:08:57.160 21.644 - 21.760: 99.2681% ( 1) 00:08:57.160 21.760 - 21.876: 99.2776% ( 1) 00:08:57.160 22.342 - 22.458: 99.2871% ( 1) 00:08:57.160 22.575 - 22.691: 99.3061% ( 2) 00:08:57.160 23.040 - 23.156: 99.3156% ( 1) 00:08:57.160 23.273 - 23.389: 99.3251% ( 1) 00:08:57.160 23.505 - 23.622: 99.3346% ( 1) 00:08:57.160 23.738 - 23.855: 99.3441% ( 1) 00:08:57.160 23.855 - 23.971: 99.3631% ( 2) 00:08:57.160 23.971 - 24.087: 99.3821% ( 2) 00:08:57.160 24.087 - 24.204: 99.4202% ( 4) 00:08:57.160 24.204 - 24.320: 99.4487% ( 3) 00:08:57.160 24.320 - 24.436: 99.5342% ( 9) 00:08:57.160 24.436 - 24.553: 99.5817% ( 5) 00:08:57.160 24.553 - 24.669: 99.6008% ( 2) 00:08:57.160 24.669 - 24.785: 99.6388% ( 4) 00:08:57.160 24.785 - 24.902: 99.6673% ( 3) 00:08:57.160 24.902 - 25.018: 99.6768% ( 1) 00:08:57.160 25.018 - 25.135: 99.7053% ( 3) 00:08:57.160 25.135 - 25.251: 99.7243% ( 2) 00:08:57.160 25.251 - 25.367: 99.7338% ( 1) 00:08:57.160 25.367 - 25.484: 99.7433% ( 1) 00:08:57.160 25.484 - 25.600: 99.7529% ( 1) 00:08:57.160 25.600 - 25.716: 99.7624% ( 1) 00:08:57.160 25.716 - 25.833: 99.7719% ( 1) 00:08:57.160 25.833 - 25.949: 99.7814% ( 1) 00:08:57.160 25.949 - 26.065: 99.7909% ( 1) 00:08:57.160 26.298 - 26.415: 99.8004% ( 1) 00:08:57.160 26.531 - 26.647: 99.8099% ( 1) 00:08:57.160 26.764 - 26.880: 99.8194% ( 1) 00:08:57.160 26.996 - 27.113: 99.8289% ( 1) 00:08:57.160 27.462 - 27.578: 99.8384% ( 1) 00:08:57.160 27.578 - 27.695: 99.8764% ( 4) 00:08:57.160 27.927 - 28.044: 99.8859% ( 1) 00:08:57.160 28.393 - 28.509: 99.8954% ( 1) 00:08:57.160 28.509 - 28.625: 99.9049% ( 1) 00:08:57.160 28.742 - 28.858: 99.9144% ( 1) 00:08:57.160 28.975 - 29.091: 99.9240% ( 1) 00:08:57.160 29.673 - 29.789: 99.9335% ( 1) 00:08:57.160 33.280 - 33.513: 99.9430% ( 1) 00:08:57.160 35.375 - 35.607: 99.9525% ( 1) 00:08:57.160 37.702 - 37.935: 99.9620% ( 1) 00:08:57.160 38.400 - 38.633: 99.9715% ( 1) 00:08:57.160 40.262 - 40.495: 99.9810% ( 1) 00:08:57.160 40.727 - 40.960: 99.9905% ( 1) 00:08:57.160 101.004 - 101.469: 100.0000% ( 1) 00:08:57.160 00:08:57.160 00:08:57.160 real 0m1.292s 00:08:57.160 user 0m1.110s 00:08:57.160 sys 0m0.135s 00:08:57.160 13:04:49 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.160 ************************************ 00:08:57.160 END TEST nvme_overhead 00:08:57.160 ************************************ 00:08:57.160 13:04:49 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:57.160 13:04:49 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:57.160 13:04:49 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:57.160 13:04:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.160 13:04:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.160 
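The submit and complete histograms above, like the per-namespace latency histograms printed by nvme_perf earlier, are cumulative: each line shows a bucket's upper bound in microseconds, the running percentage of IOs at or below that bound, and the raw count that fell into the bucket. A minimal sketch of producing that format from bucket counts follows; the bucket boundaries and counts are invented for illustration and are not taken from this run.

```c
#include <stddef.h>
#include <stdio.h>

/* Invented example buckets: upper bound in microseconds plus IO count. */
struct bucket { double upper_us; unsigned count; };

int main(void)
{
    struct bucket b[] = {
        {  9.6,   5 }, { 10.2,  60 }, { 11.4, 300 },
        { 13.1, 500 }, { 15.0, 120 }, { 20.0,  15 },
    };
    size_t n = sizeof(b) / sizeof(b[0]);
    unsigned total = 0, running = 0;

    for (size_t i = 0; i < n; i++)
        total += b[i].count;

    /* Same shape as the log: "<upper bound>: <cumulative %> ( <count>)" */
    for (size_t i = 0; i < n; i++) {
        running += b[i].count;
        printf("%10.3f: %8.4f%% ( %u)\n",
               b[i].upper_us, 100.0 * (double)running / total, b[i].count);
    }
    return 0;
}
```

The last line always reaches 100.0000% because the percentage is a running sum over every bucket.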
************************************ 00:08:57.160 START TEST nvme_arbitration 00:08:57.160 ************************************ 00:08:57.160 13:04:49 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:01.345 Initializing NVMe Controllers 00:09:01.345 Attached to 0000:00:10.0 00:09:01.345 Attached to 0000:00:11.0 00:09:01.345 Attached to 0000:00:13.0 00:09:01.345 Attached to 0000:00:12.0 00:09:01.345 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:01.345 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:01.345 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:01.345 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:01.345 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:01.345 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:01.345 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:01.345 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:01.345 Initialization complete. Launching workers. 00:09:01.345 Starting thread on core 1 with urgent priority queue 00:09:01.345 Starting thread on core 2 with urgent priority queue 00:09:01.345 Starting thread on core 3 with urgent priority queue 00:09:01.345 Starting thread on core 0 with urgent priority queue 00:09:01.345 QEMU NVMe Ctrl (12340 ) core 0: 640.00 IO/s 156.25 secs/100000 ios 00:09:01.345 QEMU NVMe Ctrl (12342 ) core 0: 640.00 IO/s 156.25 secs/100000 ios 00:09:01.345 QEMU NVMe Ctrl (12341 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:09:01.345 QEMU NVMe Ctrl (12342 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:09:01.345 QEMU NVMe Ctrl (12343 ) core 2: 682.67 IO/s 146.48 secs/100000 ios 00:09:01.345 QEMU NVMe Ctrl (12342 ) core 3: 640.00 IO/s 156.25 secs/100000 ios 00:09:01.345 ======================================================== 00:09:01.345 00:09:01.345 00:09:01.345 real 0m3.422s 00:09:01.345 user 0m9.373s 00:09:01.345 sys 0m0.151s 00:09:01.345 13:04:52 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.345 ************************************ 00:09:01.345 END TEST nvme_arbitration 00:09:01.345 ************************************ 00:09:01.345 13:04:52 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:01.345 13:04:52 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:01.345 13:04:52 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:01.345 13:04:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.345 13:04:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.345 ************************************ 00:09:01.345 START TEST nvme_single_aen 00:09:01.345 ************************************ 00:09:01.345 13:04:52 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:01.345 Asynchronous Event Request test 00:09:01.345 Attached to 0000:00:10.0 00:09:01.345 Attached to 0000:00:11.0 00:09:01.345 Attached to 0000:00:13.0 00:09:01.345 Attached to 0000:00:12.0 00:09:01.345 Reset controller to setup AER completions for this process 00:09:01.345 Registering asynchronous event callbacks... 
00:09:01.345 Getting orig temperature thresholds of all controllers 00:09:01.345 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:01.345 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:01.345 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:01.345 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:01.345 Setting all controllers temperature threshold low to trigger AER 00:09:01.345 Waiting for all controllers temperature threshold to be set lower 00:09:01.345 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:01.345 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:01.345 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:01.345 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:01.345 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:01.345 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:01.345 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:01.345 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:01.345 Waiting for all controllers to trigger AER and reset threshold 00:09:01.345 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:01.345 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:01.345 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:01.345 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:01.345 Cleaning up... 00:09:01.345 00:09:01.345 real 0m0.296s 00:09:01.345 user 0m0.116s 00:09:01.345 sys 0m0.137s 00:09:01.345 13:04:53 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.345 ************************************ 00:09:01.345 END TEST nvme_single_aen 00:09:01.345 ************************************ 00:09:01.345 13:04:53 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:01.345 13:04:53 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:01.345 13:04:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.345 13:04:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.345 13:04:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.345 ************************************ 00:09:01.345 START TEST nvme_doorbell_aers 00:09:01.345 ************************************ 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.345 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
00:09:01.346 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:01.346 13:04:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:01.346 13:04:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:01.346 13:04:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:01.346 [2024-07-25 13:04:53.457817] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:11.320 Executing: test_write_invalid_db 00:09:11.320 Waiting for AER completion... 00:09:11.320 Failure: test_write_invalid_db 00:09:11.320 00:09:11.320 Executing: test_invalid_db_write_overflow_sq 00:09:11.320 Waiting for AER completion... 00:09:11.320 Failure: test_invalid_db_write_overflow_sq 00:09:11.320 00:09:11.320 Executing: test_invalid_db_write_overflow_cq 00:09:11.320 Waiting for AER completion... 00:09:11.320 Failure: test_invalid_db_write_overflow_cq 00:09:11.320 00:09:11.320 13:05:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:11.320 13:05:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:11.320 [2024-07-25 13:05:03.496809] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:21.328 Executing: test_write_invalid_db 00:09:21.328 Waiting for AER completion... 00:09:21.328 Failure: test_write_invalid_db 00:09:21.328 00:09:21.328 Executing: test_invalid_db_write_overflow_sq 00:09:21.328 Waiting for AER completion... 00:09:21.328 Failure: test_invalid_db_write_overflow_sq 00:09:21.328 00:09:21.328 Executing: test_invalid_db_write_overflow_cq 00:09:21.328 Waiting for AER completion... 00:09:21.328 Failure: test_invalid_db_write_overflow_cq 00:09:21.328 00:09:21.328 13:05:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:21.328 13:05:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:21.601 [2024-07-25 13:05:13.536748] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:31.571 Executing: test_write_invalid_db 00:09:31.571 Waiting for AER completion... 00:09:31.571 Failure: test_write_invalid_db 00:09:31.571 00:09:31.571 Executing: test_invalid_db_write_overflow_sq 00:09:31.571 Waiting for AER completion... 00:09:31.571 Failure: test_invalid_db_write_overflow_sq 00:09:31.571 00:09:31.571 Executing: test_invalid_db_write_overflow_cq 00:09:31.571 Waiting for AER completion... 
00:09:31.571 Failure: test_invalid_db_write_overflow_cq 00:09:31.571 00:09:31.571 13:05:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:31.571 13:05:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:31.571 [2024-07-25 13:05:23.590306] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 Executing: test_write_invalid_db 00:09:41.546 Waiting for AER completion... 00:09:41.546 Failure: test_write_invalid_db 00:09:41.546 00:09:41.546 Executing: test_invalid_db_write_overflow_sq 00:09:41.546 Waiting for AER completion... 00:09:41.546 Failure: test_invalid_db_write_overflow_sq 00:09:41.546 00:09:41.546 Executing: test_invalid_db_write_overflow_cq 00:09:41.546 Waiting for AER completion... 00:09:41.546 Failure: test_invalid_db_write_overflow_cq 00:09:41.546 00:09:41.546 00:09:41.546 real 0m40.251s 00:09:41.546 user 0m34.115s 00:09:41.546 sys 0m5.762s 00:09:41.546 13:05:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.546 13:05:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:41.546 ************************************ 00:09:41.546 END TEST nvme_doorbell_aers 00:09:41.546 ************************************ 00:09:41.546 13:05:33 nvme -- nvme/nvme.sh@97 -- # uname 00:09:41.546 13:05:33 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:41.546 13:05:33 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:41.546 13:05:33 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:41.546 13:05:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.546 13:05:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.546 ************************************ 00:09:41.546 START TEST nvme_multi_aen 00:09:41.546 ************************************ 00:09:41.546 13:05:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:41.546 [2024-07-25 13:05:33.681771] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.681894] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.681927] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.683651] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.683714] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.683734] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.685383] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. 
Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.685501] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.685701] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.687289] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.687502] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 [2024-07-25 13:05:33.687661] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68563) is not found. Dropping the request. 00:09:41.546 Child process pid: 69079 00:09:41.805 [Child] Asynchronous Event Request test 00:09:41.805 [Child] Attached to 0000:00:10.0 00:09:41.805 [Child] Attached to 0000:00:11.0 00:09:41.805 [Child] Attached to 0000:00:13.0 00:09:41.805 [Child] Attached to 0000:00:12.0 00:09:41.805 [Child] Registering asynchronous event callbacks... 00:09:41.805 [Child] Getting orig temperature thresholds of all controllers 00:09:41.805 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.805 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.805 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.805 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.805 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:41.805 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.805 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.805 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.805 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.805 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.805 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.805 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.805 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.805 [Child] Cleaning up... 00:09:42.064 Asynchronous Event Request test 00:09:42.064 Attached to 0000:00:10.0 00:09:42.064 Attached to 0000:00:11.0 00:09:42.064 Attached to 0000:00:13.0 00:09:42.064 Attached to 0000:00:12.0 00:09:42.064 Reset controller to setup AER completions for this process 00:09:42.064 Registering asynchronous event callbacks... 
00:09:42.064 Getting orig temperature thresholds of all controllers 00:09:42.064 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:42.064 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:42.064 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:42.064 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:42.064 Setting all controllers temperature threshold low to trigger AER 00:09:42.064 Waiting for all controllers temperature threshold to be set lower 00:09:42.064 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:42.064 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:42.064 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:42.064 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:42.064 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:42.064 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:42.064 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:42.064 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:42.064 Waiting for all controllers to trigger AER and reset threshold 00:09:42.064 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.064 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.064 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.064 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.064 Cleaning up... 00:09:42.064 00:09:42.064 real 0m0.635s 00:09:42.064 user 0m0.220s 00:09:42.064 sys 0m0.275s 00:09:42.064 13:05:34 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.064 13:05:34 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:42.064 ************************************ 00:09:42.064 END TEST nvme_multi_aen 00:09:42.064 ************************************ 00:09:42.064 13:05:34 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:42.064 13:05:34 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:42.064 13:05:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.064 13:05:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.064 ************************************ 00:09:42.064 START TEST nvme_startup 00:09:42.064 ************************************ 00:09:42.064 13:05:34 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:42.322 Initializing NVMe Controllers 00:09:42.322 Attached to 0000:00:10.0 00:09:42.322 Attached to 0000:00:11.0 00:09:42.322 Attached to 0000:00:13.0 00:09:42.322 Attached to 0000:00:12.0 00:09:42.322 Initialization complete. 00:09:42.322 Time used:194937.594 (us). 
00:09:42.322 ************************************ 00:09:42.322 END TEST nvme_startup 00:09:42.322 ************************************ 00:09:42.322 00:09:42.322 real 0m0.287s 00:09:42.322 user 0m0.119s 00:09:42.322 sys 0m0.125s 00:09:42.322 13:05:34 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.322 13:05:34 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:42.322 13:05:34 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:42.322 13:05:34 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:42.322 13:05:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.322 13:05:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.322 ************************************ 00:09:42.322 START TEST nvme_multi_secondary 00:09:42.322 ************************************ 00:09:42.322 13:05:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:42.322 13:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69135 00:09:42.323 13:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:42.323 13:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69136 00:09:42.323 13:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:42.323 13:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:45.645 Initializing NVMe Controllers 00:09:45.645 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:45.645 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:45.645 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:45.645 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:45.645 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:45.645 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:45.645 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:45.645 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:45.645 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:45.645 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:45.645 Initialization complete. Launching workers. 
00:09:45.645 ======================================================== 00:09:45.645 Latency(us) 00:09:45.645 Device Information : IOPS MiB/s Average min max 00:09:45.645 PCIE (0000:00:10.0) NSID 1 from core 1: 5204.17 20.33 3072.58 1186.19 8592.58 00:09:45.645 PCIE (0000:00:11.0) NSID 1 from core 1: 5204.17 20.33 3073.92 1231.00 8567.67 00:09:45.645 PCIE (0000:00:13.0) NSID 1 from core 1: 5204.17 20.33 3073.88 1237.76 7999.82 00:09:45.645 PCIE (0000:00:12.0) NSID 1 from core 1: 5204.17 20.33 3073.80 1194.42 8269.76 00:09:45.645 PCIE (0000:00:12.0) NSID 2 from core 1: 5204.17 20.33 3073.95 1207.11 7711.41 00:09:45.645 PCIE (0000:00:12.0) NSID 3 from core 1: 5204.17 20.33 3073.93 1212.86 8573.51 00:09:45.645 ======================================================== 00:09:45.645 Total : 31225.01 121.97 3073.68 1186.19 8592.58 00:09:45.645 00:09:45.904 Initializing NVMe Controllers 00:09:45.904 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:45.904 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:45.904 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:45.904 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:45.904 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:45.904 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:45.904 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:45.904 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:45.904 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:45.904 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:45.904 Initialization complete. Launching workers. 00:09:45.904 ======================================================== 00:09:45.904 Latency(us) 00:09:45.904 Device Information : IOPS MiB/s Average min max 00:09:45.904 PCIE (0000:00:10.0) NSID 1 from core 2: 2175.20 8.50 7351.69 1765.93 20478.84 00:09:45.904 PCIE (0000:00:11.0) NSID 1 from core 2: 2175.20 8.50 7352.62 1991.06 16846.18 00:09:45.904 PCIE (0000:00:13.0) NSID 1 from core 2: 2175.20 8.50 7354.94 1675.71 20082.16 00:09:45.904 PCIE (0000:00:12.0) NSID 1 from core 2: 2175.20 8.50 7354.79 1522.36 20453.36 00:09:45.904 PCIE (0000:00:12.0) NSID 2 from core 2: 2175.20 8.50 7354.59 1153.57 20646.09 00:09:45.904 PCIE (0000:00:12.0) NSID 3 from core 2: 2175.20 8.50 7354.48 1124.23 20762.76 00:09:45.904 ======================================================== 00:09:45.904 Total : 13051.21 50.98 7353.85 1124.23 20762.76 00:09:45.904 00:09:45.904 13:05:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69135 00:09:47.806 Initializing NVMe Controllers 00:09:47.806 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.806 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.806 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.806 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.806 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:47.806 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:47.806 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:47.806 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:47.806 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:47.806 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:47.806 Initialization complete. Launching workers. 
00:09:47.807 ======================================================== 00:09:47.807 Latency(us) 00:09:47.807 Device Information : IOPS MiB/s Average min max 00:09:47.807 PCIE (0000:00:10.0) NSID 1 from core 0: 8111.22 31.68 1970.98 981.09 7729.84 00:09:47.807 PCIE (0000:00:11.0) NSID 1 from core 0: 8111.22 31.68 1972.06 991.04 7264.13 00:09:47.807 PCIE (0000:00:13.0) NSID 1 from core 0: 8111.22 31.68 1972.00 974.09 7640.12 00:09:47.807 PCIE (0000:00:12.0) NSID 1 from core 0: 8111.22 31.68 1971.96 918.50 8067.18 00:09:47.807 PCIE (0000:00:12.0) NSID 2 from core 0: 8111.22 31.68 1971.88 898.49 7995.66 00:09:47.807 PCIE (0000:00:12.0) NSID 3 from core 0: 8111.22 31.68 1971.85 865.10 7872.75 00:09:47.807 ======================================================== 00:09:47.807 Total : 48667.32 190.11 1971.79 865.10 8067.18 00:09:47.807 00:09:48.065 13:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69136 00:09:48.065 13:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:48.065 13:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69206 00:09:48.065 13:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69207 00:09:48.065 13:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:48.065 13:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:51.356 Initializing NVMe Controllers 00:09:51.356 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.356 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.356 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.356 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.356 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:51.356 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:51.356 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:51.356 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:51.356 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:51.356 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:51.356 Initialization complete. Launching workers. 
00:09:51.356 ======================================================== 00:09:51.356 Latency(us) 00:09:51.356 Device Information : IOPS MiB/s Average min max 00:09:51.356 PCIE (0000:00:10.0) NSID 1 from core 0: 5068.83 19.80 3154.69 1021.53 6953.17 00:09:51.356 PCIE (0000:00:11.0) NSID 1 from core 0: 5068.83 19.80 3156.11 1075.91 7319.82 00:09:51.356 PCIE (0000:00:13.0) NSID 1 from core 0: 5068.83 19.80 3156.00 1089.48 7730.72 00:09:51.356 PCIE (0000:00:12.0) NSID 1 from core 0: 5068.83 19.80 3155.92 1081.47 7699.44 00:09:51.356 PCIE (0000:00:12.0) NSID 2 from core 0: 5068.83 19.80 3155.86 1076.29 7841.71 00:09:51.356 PCIE (0000:00:12.0) NSID 3 from core 0: 5068.83 19.80 3155.76 1049.35 7358.17 00:09:51.356 ======================================================== 00:09:51.356 Total : 30412.98 118.80 3155.72 1021.53 7841.71 00:09:51.356 00:09:51.616 Initializing NVMe Controllers 00:09:51.616 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.616 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.616 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.616 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.616 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:51.616 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:51.616 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:51.616 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:51.616 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:51.616 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:51.616 Initialization complete. Launching workers. 00:09:51.616 ======================================================== 00:09:51.616 Latency(us) 00:09:51.616 Device Information : IOPS MiB/s Average min max 00:09:51.616 PCIE (0000:00:10.0) NSID 1 from core 1: 5257.38 20.54 3041.45 991.86 6618.64 00:09:51.616 PCIE (0000:00:11.0) NSID 1 from core 1: 5257.38 20.54 3042.64 1020.65 6444.64 00:09:51.616 PCIE (0000:00:13.0) NSID 1 from core 1: 5257.38 20.54 3042.51 1005.77 6252.11 00:09:51.616 PCIE (0000:00:12.0) NSID 1 from core 1: 5257.38 20.54 3042.38 1007.04 6278.36 00:09:51.616 PCIE (0000:00:12.0) NSID 2 from core 1: 5257.38 20.54 3042.26 1018.48 6370.70 00:09:51.616 PCIE (0000:00:12.0) NSID 3 from core 1: 5257.38 20.54 3042.13 963.45 6089.21 00:09:51.616 ======================================================== 00:09:51.616 Total : 31544.26 123.22 3042.23 963.45 6618.64 00:09:51.616 00:09:53.545 Initializing NVMe Controllers 00:09:53.545 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.545 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.545 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.545 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.545 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:53.545 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:53.545 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:53.545 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:53.545 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:53.545 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:53.545 Initialization complete. Launching workers. 
00:09:53.545 ======================================================== 00:09:53.545 Latency(us) 00:09:53.545 Device Information : IOPS MiB/s Average min max 00:09:53.545 PCIE (0000:00:10.0) NSID 1 from core 2: 3515.02 13.73 4549.48 1001.69 13825.01 00:09:53.545 PCIE (0000:00:11.0) NSID 1 from core 2: 3515.02 13.73 4551.18 1014.37 13776.82 00:09:53.545 PCIE (0000:00:13.0) NSID 1 from core 2: 3515.02 13.73 4550.88 998.86 13848.54 00:09:53.545 PCIE (0000:00:12.0) NSID 1 from core 2: 3515.02 13.73 4550.82 1008.85 13802.27 00:09:53.545 PCIE (0000:00:12.0) NSID 2 from core 2: 3515.02 13.73 4551.19 961.49 14078.72 00:09:53.545 PCIE (0000:00:12.0) NSID 3 from core 2: 3515.02 13.73 4550.92 907.45 13851.52 00:09:53.545 ======================================================== 00:09:53.545 Total : 21090.12 82.38 4550.75 907.45 14078.72 00:09:53.545 00:09:53.545 ************************************ 00:09:53.545 END TEST nvme_multi_secondary 00:09:53.545 ************************************ 00:09:53.545 13:05:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69206 00:09:53.545 13:05:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69207 00:09:53.545 00:09:53.545 real 0m10.976s 00:09:53.545 user 0m18.630s 00:09:53.545 sys 0m0.903s 00:09:53.545 13:05:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.545 13:05:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 13:05:45 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:53.545 13:05:45 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:53.545 13:05:45 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/68148 ]] 00:09:53.545 13:05:45 nvme -- common/autotest_common.sh@1090 -- # kill 68148 00:09:53.545 13:05:45 nvme -- common/autotest_common.sh@1091 -- # wait 68148 00:09:53.545 [2024-07-25 13:05:45.449299] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.545 [2024-07-25 13:05:45.450245] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.545 [2024-07-25 13:05:45.450291] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.545 [2024-07-25 13:05:45.450314] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.545 [2024-07-25 13:05:45.452221] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.545 [2024-07-25 13:05:45.452273] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.452294] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.452328] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.454195] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 
00:09:53.546 [2024-07-25 13:05:45.454243] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.454263] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.454282] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.456214] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.456264] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.456284] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.456302] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69078) is not found. Dropping the request. 00:09:53.546 [2024-07-25 13:05:45.710575] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:53.546 13:05:45 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:53.546 13:05:45 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:53.546 13:05:45 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:53.546 13:05:45 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:53.546 13:05:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.546 13:05:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:53.805 ************************************ 00:09:53.805 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:53.805 ************************************ 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:53.805 * Looking for test storage... 
00:09:53.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:53.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69361 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69361 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69361 ']' 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.805 13:05:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:54.063 [2024-07-25 13:05:46.025758] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:54.063 [2024-07-25 13:05:46.026168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69361 ] 00:09:54.063 [2024-07-25 13:05:46.225942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.321 [2024-07-25 13:05:46.479168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.321 [2024-07-25 13:05:46.479311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.321 [2024-07-25 13:05:46.479454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.321 [2024-07-25 13:05:46.479460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.254 nvme0n1 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_p8kU3.txt 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:55.254 13:05:47 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.254 true 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721912747 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69389 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:55.254 13:05:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.156 [2024-07-25 13:05:49.302606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:57.156 [2024-07-25 13:05:49.302959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:57.156 [2024-07-25 13:05:49.302992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:57.156 [2024-07-25 13:05:49.303015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.156 [2024-07-25 13:05:49.304923] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:57.156 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69389 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69389 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69389 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:57.156 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_p8kU3.txt 00:09:57.414 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:57.414 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_p8kU3.txt 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69361 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69361 ']' 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69361 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69361 00:09:57.415 killing process with pid 69361 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69361' 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69361 00:09:57.415 13:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69361 00:09:59.948 13:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:59.948 13:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:59.948 ************************************ 00:09:59.948 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:59.948 ************************************ 00:09:59.948 00:09:59.948 real 0m5.805s 00:09:59.948 user 0m19.886s 00:09:59.948 sys 0m0.587s 00:09:59.948 13:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.948 13:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:59.948 13:05:51 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:59.948 13:05:51 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:59.948 13:05:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:59.948 13:05:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.948 13:05:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.948 ************************************ 00:09:59.948 START TEST nvme_fio 00:09:59.948 ************************************ 00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 
00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:59.948 13:05:51 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:59.948 13:05:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:00.208 13:05:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:00.208 13:05:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:00.208 13:05:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:00.467 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:00.467 fio-3.35 00:10:00.467 Starting 1 thread 00:10:03.775 00:10:03.775 test: (groupid=0, jobs=1): err= 0: pid=69542: Thu Jul 25 13:05:55 2024 00:10:03.775 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec) 00:10:03.775 slat (nsec): min=4413, max=49922, avg=6282.68, stdev=2013.00 00:10:03.775 clat (usec): min=260, max=10445, avg=4025.66, stdev=604.39 00:10:03.775 lat (usec): min=265, max=10488, avg=4031.94, stdev=605.20 00:10:03.775 clat percentiles (usec): 00:10:03.775 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3556], 00:10:03.775 | 30.00th=[ 3654], 40.00th=[ 3818], 50.00th=[ 4146], 60.00th=[ 4228], 00:10:03.775 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:10:03.775 | 99.00th=[ 6652], 99.50th=[ 7504], 99.90th=[ 8094], 99.95th=[ 8979], 00:10:03.775 | 99.99th=[10290] 00:10:03.775 bw ( KiB/s): min=60096, max=64992, per=100.00%, avg=63282.67, stdev=2762.17, samples=3 00:10:03.775 iops : min=15024, max=16248, avg=15820.67, stdev=690.54, samples=3 00:10:03.775 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(124MiB/2001msec); 0 zone resets 00:10:03.775 slat (nsec): min=4374, max=48490, avg=6459.08, stdev=1990.16 00:10:03.775 clat (usec): min=241, max=10300, avg=4042.34, stdev=611.08 00:10:03.775 lat (usec): min=247, max=10317, avg=4048.80, stdev=611.86 00:10:03.775 clat percentiles (usec): 00:10:03.775 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3589], 00:10:03.775 | 30.00th=[ 3687], 40.00th=[ 3884], 50.00th=[ 4146], 60.00th=[ 4228], 00:10:03.775 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:10:03.775 | 99.00th=[ 6783], 99.50th=[ 7439], 99.90th=[ 8225], 99.95th=[ 9110], 00:10:03.775 | 99.99th=[10159] 00:10:03.775 bw ( KiB/s): min=59448, max=65144, per=99.56%, avg=62973.33, stdev=3080.17, samples=3 00:10:03.775 iops : min=14862, max=16286, avg=15743.33, stdev=770.04, samples=3 00:10:03.775 lat (usec) : 250=0.01%, 500=0.03%, 750=0.01%, 1000=0.02% 00:10:03.775 lat (msec) : 2=0.07%, 4=42.57%, 10=57.28%, 20=0.02% 00:10:03.775 cpu : usr=98.80%, sys=0.20%, ctx=2, majf=0, minf=606 00:10:03.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:03.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.775 issued rwts: total=31602,31641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.775 00:10:03.775 Run status group 0 (all jobs): 00:10:03.775 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:10:03.775 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=124MiB (130MB), run=2001-2001msec 00:10:03.775 ----------------------------------------------------- 00:10:03.775 Suppressions used: 00:10:03.775 count bytes template 00:10:03.775 1 32 /usr/src/fio/parse.c 00:10:03.775 1 8 libtcmalloc_minimal.so 00:10:03.775 ----------------------------------------------------- 00:10:03.775 00:10:03.775 13:05:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:03.775 13:05:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:03.775 13:05:55 nvme.nvme_fio -- 
nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:03.775 13:05:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:04.033 13:05:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:04.033 13:05:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:04.291 13:05:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:04.291 13:05:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:04.291 13:05:56 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:04.291 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:04.291 fio-3.35 00:10:04.291 Starting 1 thread 00:10:07.572 00:10:07.572 test: (groupid=0, jobs=1): err= 0: pid=69603: Thu Jul 25 13:05:59 2024 00:10:07.572 read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec) 00:10:07.572 slat (usec): min=4, max=621, avg= 6.36, stdev= 4.11 00:10:07.572 clat (usec): min=293, max=10364, avg=4036.81, stdev=769.23 00:10:07.572 lat (usec): min=299, max=10435, avg=4043.17, stdev=770.16 00:10:07.572 clat percentiles (usec): 00:10:07.572 | 1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3523], 00:10:07.572 | 30.00th=[ 3589], 40.00th=[ 3720], 50.00th=[ 3916], 60.00th=[ 4146], 00:10:07.572 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5080], 00:10:07.572 | 99.00th=[ 7308], 99.50th=[ 8029], 99.90th=[ 
9110], 99.95th=[ 9634], 00:10:07.572 | 99.99th=[10159] 00:10:07.572 bw ( KiB/s): min=60016, max=64472, per=99.53%, avg=62802.67, stdev=2429.06, samples=3 00:10:07.572 iops : min=15004, max=16118, avg=15700.67, stdev=607.26, samples=3 00:10:07.572 write: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec); 0 zone resets 00:10:07.572 slat (usec): min=4, max=370, avg= 6.48, stdev= 3.31 00:10:07.572 clat (usec): min=382, max=10171, avg=4042.90, stdev=753.66 00:10:07.572 lat (usec): min=388, max=10186, avg=4049.38, stdev=754.57 00:10:07.572 clat percentiles (usec): 00:10:07.572 | 1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3523], 00:10:07.572 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3949], 60.00th=[ 4178], 00:10:07.572 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5014], 00:10:07.572 | 99.00th=[ 7308], 99.50th=[ 7963], 99.90th=[ 9110], 99.95th=[ 9634], 00:10:07.572 | 99.99th=[ 9896] 00:10:07.572 bw ( KiB/s): min=60432, max=63800, per=98.83%, avg=62405.33, stdev=1756.99, samples=3 00:10:07.572 iops : min=15108, max=15950, avg=15601.33, stdev=439.25, samples=3 00:10:07.572 lat (usec) : 500=0.01%, 750=0.01% 00:10:07.572 lat (msec) : 2=0.08%, 4=51.83%, 10=48.06%, 20=0.01% 00:10:07.572 cpu : usr=98.00%, sys=0.60%, ctx=28, majf=0, minf=607 00:10:07.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:07.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.572 issued rwts: total=31565,31588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.572 00:10:07.572 Run status group 0 (all jobs): 00:10:07.572 READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec 00:10:07.572 WRITE: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:10:07.830 ----------------------------------------------------- 00:10:07.830 Suppressions used: 00:10:07.830 count bytes template 00:10:07.830 1 32 /usr/src/fio/parse.c 00:10:07.830 1 8 libtcmalloc_minimal.so 00:10:07.830 ----------------------------------------------------- 00:10:07.830 00:10:07.830 13:05:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:07.830 13:05:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:07.830 13:05:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:07.830 13:05:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:08.088 13:06:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:08.088 13:06:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:08.347 13:06:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:08.347 13:06:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:08.347 13:06:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:08.606 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:08.606 fio-3.35 00:10:08.606 Starting 1 thread 00:10:11.891 00:10:11.891 test: (groupid=0, jobs=1): err= 0: pid=69664: Thu Jul 25 13:06:03 2024 00:10:11.891 read: IOPS=16.8k, BW=65.4MiB/s (68.6MB/s)(131MiB/2001msec) 00:10:11.891 slat (usec): min=4, max=119, avg= 5.83, stdev= 1.99 00:10:11.891 clat (usec): min=228, max=9963, avg=3794.83, stdev=604.13 00:10:11.891 lat (usec): min=233, max=10082, avg=3800.65, stdev=605.01 00:10:11.891 clat percentiles (usec): 00:10:11.891 | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3392], 20.00th=[ 3490], 00:10:11.891 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3687], 00:10:11.891 | 70.00th=[ 3785], 80.00th=[ 4047], 90.00th=[ 4359], 95.00th=[ 4621], 00:10:11.891 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 8094], 99.95th=[ 8455], 00:10:11.891 | 99.99th=[ 9765] 00:10:11.891 bw ( KiB/s): min=56648, max=70960, per=98.55%, avg=66040.00, stdev=8136.79, samples=3 00:10:11.891 iops : min=14162, max=17740, avg=16510.00, stdev=2034.20, samples=3 00:10:11.891 write: IOPS=16.8k, BW=65.6MiB/s (68.8MB/s)(131MiB/2001msec); 0 zone resets 00:10:11.891 slat (nsec): min=4664, max=47403, avg=5950.18, stdev=1844.77 00:10:11.891 clat (usec): min=251, max=9782, avg=3802.90, stdev=607.59 00:10:11.891 lat (usec): min=257, max=9809, avg=3808.85, stdev=608.46 00:10:11.891 clat percentiles (usec): 00:10:11.891 | 1.00th=[ 2868], 5.00th=[ 3261], 10.00th=[ 3392], 20.00th=[ 3490], 00:10:11.891 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:10:11.891 | 70.00th=[ 3785], 80.00th=[ 4047], 90.00th=[ 4359], 95.00th=[ 4621], 00:10:11.891 | 99.00th=[ 6587], 99.50th=[ 7046], 99.90th=[ 8225], 99.95th=[ 8455], 00:10:11.891 | 99.99th=[ 9503] 00:10:11.891 bw ( KiB/s): min=56968, max=70496, per=98.16%, avg=65933.33, stdev=7764.62, samples=3 00:10:11.891 iops : 
min=14242, max=17624, avg=16483.33, stdev=1941.15, samples=3 00:10:11.891 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:11.891 lat (msec) : 2=0.05%, 4=79.24%, 10=20.66% 00:10:11.891 cpu : usr=99.00%, sys=0.05%, ctx=2, majf=0, minf=607 00:10:11.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.891 issued rwts: total=33522,33602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.891 00:10:11.891 Run status group 0 (all jobs): 00:10:11.891 READ: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=131MiB (137MB), run=2001-2001msec 00:10:11.891 WRITE: bw=65.6MiB/s (68.8MB/s), 65.6MiB/s-65.6MiB/s (68.8MB/s-68.8MB/s), io=131MiB (138MB), run=2001-2001msec 00:10:12.151 ----------------------------------------------------- 00:10:12.151 Suppressions used: 00:10:12.151 count bytes template 00:10:12.151 1 32 /usr/src/fio/parse.c 00:10:12.151 1 8 libtcmalloc_minimal.so 00:10:12.151 ----------------------------------------------------- 00:10:12.151 00:10:12.151 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:12.151 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:12.151 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:12.151 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:12.410 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:12.410 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:12.671 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:12.671 13:06:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 
00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:12.671 13:06:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:12.930 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:12.930 fio-3.35 00:10:12.930 Starting 1 thread 00:10:17.120 00:10:17.120 test: (groupid=0, jobs=1): err= 0: pid=69725: Thu Jul 25 13:06:08 2024 00:10:17.120 read: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec) 00:10:17.120 slat (nsec): min=4400, max=93507, avg=6051.51, stdev=1811.62 00:10:17.120 clat (usec): min=307, max=7818, avg=3881.42, stdev=522.63 00:10:17.120 lat (usec): min=313, max=7823, avg=3887.48, stdev=523.28 00:10:17.120 clat percentiles (usec): 00:10:17.120 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3458], 00:10:17.120 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3785], 60.00th=[ 4015], 00:10:17.120 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4948], 00:10:17.120 | 99.00th=[ 5407], 99.50th=[ 5932], 99.90th=[ 7570], 99.95th=[ 7635], 00:10:17.120 | 99.99th=[ 7767] 00:10:17.120 bw ( KiB/s): min=64152, max=66968, per=100.00%, avg=66016.00, stdev=1614.40, samples=3 00:10:17.120 iops : min=16038, max=16742, avg=16504.00, stdev=403.60, samples=3 00:10:17.120 write: IOPS=16.4k, BW=64.2MiB/s (67.3MB/s)(129MiB/2001msec); 0 zone resets 00:10:17.120 slat (nsec): min=4481, max=68719, avg=6182.77, stdev=1801.86 00:10:17.120 clat (usec): min=358, max=7877, avg=3887.44, stdev=520.77 00:10:17.120 lat (usec): min=364, max=7882, avg=3893.63, stdev=521.41 00:10:17.120 clat percentiles (usec): 00:10:17.120 | 1.00th=[ 3163], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458], 00:10:17.120 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3785], 60.00th=[ 4015], 00:10:17.120 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 5014], 00:10:17.120 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 7570], 99.95th=[ 7701], 00:10:17.120 | 99.99th=[ 7767] 00:10:17.120 bw ( KiB/s): min=63888, max=67088, per=100.00%, avg=65901.33, stdev=1752.86, samples=3 00:10:17.120 iops : min=15972, max=16772, avg=16475.33, stdev=438.22, samples=3 00:10:17.120 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:17.120 lat (msec) : 2=0.05%, 4=57.92%, 10=42.00% 00:10:17.120 cpu : usr=98.95%, sys=0.10%, ctx=3, majf=0, minf=605 00:10:17.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:17.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.120 issued rwts: total=32840,32897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.120 00:10:17.120 Run status group 0 (all jobs): 00:10:17.120 READ: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (135MB), run=2001-2001msec 00:10:17.120 WRITE: bw=64.2MiB/s (67.3MB/s), 64.2MiB/s-64.2MiB/s (67.3MB/s-67.3MB/s), io=129MiB (135MB), run=2001-2001msec 00:10:17.120 ----------------------------------------------------- 00:10:17.120 
Suppressions used: 00:10:17.120 count bytes template 00:10:17.120 1 32 /usr/src/fio/parse.c 00:10:17.120 1 8 libtcmalloc_minimal.so 00:10:17.120 ----------------------------------------------------- 00:10:17.120 00:10:17.120 ************************************ 00:10:17.120 END TEST nvme_fio 00:10:17.120 ************************************ 00:10:17.120 13:06:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:17.120 13:06:09 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:17.120 00:10:17.120 real 0m17.589s 00:10:17.120 user 0m14.276s 00:10:17.120 sys 0m1.790s 00:10:17.120 13:06:09 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.120 13:06:09 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:17.120 ************************************ 00:10:17.120 END TEST nvme 00:10:17.120 ************************************ 00:10:17.120 00:10:17.120 real 1m31.160s 00:10:17.120 user 3m44.998s 00:10:17.120 sys 0m13.740s 00:10:17.120 13:06:09 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.120 13:06:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.120 13:06:09 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:10:17.120 13:06:09 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:17.120 13:06:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:17.120 13:06:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.120 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:10:17.120 ************************************ 00:10:17.120 START TEST nvme_scc 00:10:17.120 ************************************ 00:10:17.120 13:06:09 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:17.379 * Looking for test storage... 
00:10:17.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:17.379 13:06:09 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.379 13:06:09 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.379 13:06:09 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.379 13:06:09 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.379 13:06:09 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.379 13:06:09 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.379 13:06:09 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.379 13:06:09 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:17.379 13:06:09 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:17.379 13:06:09 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:17.379 13:06:09 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.379 13:06:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:17.379 13:06:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:17.379 13:06:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:17.379 13:06:09 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:17.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.895 Waiting for block devices as requested 00:10:17.895 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:17.895 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.153 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.153 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.435 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:23.435 13:06:15 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:23.435 13:06:15 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:23.435 13:06:15 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:23.435 13:06:15 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:23.435 13:06:15 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:23.435 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:23.436 
13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:23.436 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:23.437 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
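Every repeated IFS=:/read/eval triple in this stretch is the nvme_get helper from test/common/nvme/functions.sh turning one line of "nvme id-ctrl /dev/nvme0" output into an entry of the nvme0 associative array (vid, mdts, oacs, wctemp, and so on). A simplified paraphrase of that loop, leaving out the eval-based quoting the real helper uses:

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip banner/blank lines with no "name: value" pair
        reg=${reg//[[:space:]]/}         # strip the padding around the register name
        nvme0[$reg]=${val# }             # keep the raw value, e.g. nvme0[mdts]=7
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)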
00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:23.437 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.437 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:23.438 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:23.439 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
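Further down, the same scan captures the per-format entries lbaf0-lbaf7 and the flbas field for namespace nvme0n1, then files the controller into the ctrls/nvmes/bdfs arrays. Those values are enough to recover the in-use block size handed to the fio jobs above; the test itself simply greps spdk_nvme_identify output for 'Extended Data LBA', but an equivalent lookup against the scanned arrays would look roughly like this (sketch only, variable names are illustrative):

    # flbas=0x4 in this trace selects LBA format 4: "ms:0 lbads:12 rp:0 (in use)"
    fmt=$(( ${nvme0n1[flbas]} & 0xf ))
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme0n1[lbaf$fmt]}")
    echo "logical block size: $(( 1 << lbads )) bytes"   # 1 << 12 = 4096, matching --bs=4096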
00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:23.440 13:06:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:23.440 13:06:15 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:23.440 13:06:15 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:23.440 13:06:15 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:23.440 13:06:15 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:23.440 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:23.441 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 
13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.441 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:23.442 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:23.442 13:06:15 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.442 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:23.443 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:23.443 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 
13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:23.444 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:23.445 13:06:15 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:23.445 13:06:15 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:23.445 13:06:15 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:23.445 13:06:15 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.445 13:06:15 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:23.445 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:23.446 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:23.447 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.447 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 
13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.448 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:23.449 13:06:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.449 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.450 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.712 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:23.713 13:06:15 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.713 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.714 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
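[editor's note] The trace above is the per-namespace half of nvme/functions.sh at work: nvme_get runs `nvme id-ns` on each /dev/nvme2n* device and folds every "reg : val" line into a global associative array (nvme2n2, nvme2n3, ...). The in-use format recorded here is lbaf4 (ms:0 lbads:12), i.e. 4096-byte data blocks with no metadata, so nsze=0x100000 blocks works out to 0x100000 * 4096 bytes = 4 GiB per namespace. A minimal, hedged sketch of that loop, inferred from the trace rather than copied from the real functions.sh, with illustrative names only:

    # Illustrative sketch (assumed, not the verbatim SPDK source): split each
    # "reg : val" line from nvme-cli on ':' and eval it into a global assoc array.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                         # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue               # skip lines with no value
            eval "${ref}[${reg// /}]=\"${val# }\""  # e.g. nvme2n3[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage sketch: nvme_get_sketch nvme2n3 id-ns /dev/nvme2n3

This mirrors the @16-@23 lines in the trace (IFS=:, read -r reg val, the [[ -n ... ]] guard, and the eval into the array); the function body itself is an assumption made for illustration.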
00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:23.715 13:06:15 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:23.715 13:06:15 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:23.715 13:06:15 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:23.715 13:06:15 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.715 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:23.716 13:06:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:23.716 13:06:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:23.716 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:23.717 13:06:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:23.717 13:06:15 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:23.717 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:23.718 
13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
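The xtrace above shows the scan loop in test/common/nvme/functions.sh building the nvme3 associative array from nvme id-ctrl output, one reg/val pair at a time, before registering the controller in the ctrls/nvmes/bdfs maps. A condensed, illustrative sketch of that parsing pattern follows; the array name ctrl_regs and the direct use of a single associative array (instead of the eval into a per-controller named array seen in the trace) are simplifications, not the repo's exact helper:

  # Parse "name      : value" pairs emitted by nvme-cli's id-ctrl into a map.
  declare -A ctrl_regs
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # register names are padded with spaces before ':'
      val=${val# }               # drop the single space after the colon
      [[ -n $reg && -n $val ]] || continue
      ctrl_regs[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme3)
  echo "oncs=${ctrl_regs[oncs]:-unset}"   # e.g. 0x15d for the QEMU controllers in this run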
00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:23.718 13:06:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:23.718 13:06:15 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:23.719 13:06:15 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:10:23.719 13:06:15 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:10:23.719 13:06:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:23.719 13:06:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:23.719 13:06:15 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:24.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.852 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:24.852 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:24.852 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:24.852 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:25.110 13:06:17 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:25.110 13:06:17 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:25.110 13:06:17 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.110 13:06:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:25.110 ************************************ 00:10:25.110 START TEST nvme_simple_copy 00:10:25.110 ************************************ 00:10:25.110 13:06:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:25.368 Initializing NVMe Controllers 00:10:25.368 Attaching to 0000:00:10.0 00:10:25.368 Controller supports SCC. Attached to 0000:00:10.0 00:10:25.368 Namespace ID: 1 size: 6GB 00:10:25.368 Initialization complete. 00:10:25.368 00:10:25.368 Controller QEMU NVMe Ctrl (12340 ) 00:10:25.368 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:25.368 Namespace Block Size:4096 00:10:25.368 Writing LBAs 0 to 63 with Random Data 00:10:25.368 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:25.368 LBAs matching Written Data: 64 00:10:25.368 00:10:25.368 real 0m0.306s 00:10:25.368 user 0m0.128s 00:10:25.368 sys 0m0.076s 00:10:25.368 ************************************ 00:10:25.368 END TEST nvme_simple_copy 00:10:25.368 ************************************ 00:10:25.368 13:06:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.368 13:06:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:25.368 ************************************ 00:10:25.368 END TEST nvme_scc 00:10:25.368 ************************************ 00:10:25.368 00:10:25.368 real 0m8.127s 00:10:25.368 user 0m1.347s 00:10:25.368 sys 0m1.661s 00:10:25.368 13:06:17 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.368 13:06:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:25.368 13:06:17 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:10:25.368 13:06:17 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:10:25.368 13:06:17 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:10:25.368 13:06:17 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]] 00:10:25.368 13:06:17 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:25.368 13:06:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:25.368 13:06:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.368 13:06:17 -- common/autotest_common.sh@10 -- # set +x 00:10:25.368 ************************************ 00:10:25.368 START TEST nvme_fdp 00:10:25.368 ************************************ 00:10:25.368 13:06:17 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:25.368 * Looking for test storage... 
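Before the simple-copy run above, get_ctrls_with_feature walked every discovered controller and gated on ONCS (Optional NVM Command Support) bit 8, which advertises the Copy command that the SCC test needs; 0x15d has that bit set, so all four controllers qualified and nvme1 at 0000:00:10.0 was selected. A minimal sketch of that gate, assuming an illustrative helper name has_scc that takes the ONCS value as an argument (the repo's ctrl_has_scc reads it out of the parsed register map instead):

  # Return success when ONCS bit 8 (Copy / simple copy support) is set.
  has_scc() {
      local oncs=$1
      (( oncs & (1 << 8) ))
  }

  if has_scc 0x15d; then
      echo "controller supports the Copy command (SCC tests can run)"
  fi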
00:10:25.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:25.368 13:06:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.368 13:06:17 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.368 13:06:17 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.368 13:06:17 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.368 13:06:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.368 13:06:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.368 13:06:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.368 13:06:17 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:25.368 13:06:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:25.368 13:06:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:25.368 13:06:17 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.368 13:06:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:25.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:25.934 Waiting for block devices as requested 00:10:25.934 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.192 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.192 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.192 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.497 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:31.497 13:06:23 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:31.497 13:06:23 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:31.498 13:06:23 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:31.498 13:06:23 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:31.498 13:06:23 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:31.498 13:06:23 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 
13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:31.498 13:06:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.498 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:31.499 13:06:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.499 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:31.500 13:06:23 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.500 13:06:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:10:31.500 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.501 
13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:31.501 
13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.501 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:31.502 13:06:23 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:31.502 13:06:23 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:31.502 13:06:23 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:31.502 13:06:23 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:31.502 13:06:23 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.502 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:31.503 13:06:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 
13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.503 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:31.504 13:06:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:31.504 13:06:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.504 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:31.505 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:31.506 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 
13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
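[Note] The lbaf entries being recorded here describe the namespace's supported LBA formats: ms is the metadata size in bytes, lbads is the base-2 exponent of the LBA data size, and rp is the relative performance hint; the format tagged "(in use)" is the one selected by the namespace's flbas field. As a quick reader-side check of the two lbads values that appear in this trace (not part of the test script itself):

    echo $((1 << 9))    # lbads:9  -> 512-byte logical blocks
    echo $((1 << 12))   # lbads:12 -> 4096-byte logical blocks
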
00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:31.507 13:06:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:31.771 13:06:23 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:31.771 13:06:23 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:31.771 13:06:23 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:31.771 13:06:23 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:31.771 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:31.772 13:06:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.772 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:31.773 13:06:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:31.773 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:31.774 13:06:23 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 
13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:31.774 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:31.775 13:06:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.775 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:31.776 13:06:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:31.776 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.777 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:31.778 13:06:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.778 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:31.779 13:06:23 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:31.779 13:06:23 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:31.779 13:06:23 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:31.779 13:06:23 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:31.779 13:06:23 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:31.779 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.780 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.781 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 
13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:31.782 13:06:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:10:31.782 13:06:23 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:10:31.783 13:06:23 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:10:31.783 13:06:23 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:10:31.783 13:06:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:31.783 13:06:23 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:31.783 13:06:23 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:32.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.914 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.914 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.914 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.914 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.172 13:06:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:33.172 13:06:25 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:33.172 13:06:25 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.172 13:06:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:33.172 ************************************ 00:10:33.172 START TEST nvme_flexible_data_placement 00:10:33.172 ************************************ 00:10:33.172 13:06:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:33.430 Initializing NVMe Controllers 00:10:33.430 Attaching to 0000:00:13.0 00:10:33.430 Controller supports FDP Attached to 0000:00:13.0 00:10:33.430 Namespace ID: 1 Endurance Group ID: 1 
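Editor's note: the trace above walks every discovered controller through ctrl_has_fdp, which reads the CTRATT value parsed out of the identify data and tests bit 19; only nvme3 (CTRATT 0x88010) passes, so it becomes the target for the FDP run below. As a hedged illustration of the two bash patterns involved, here is a minimal sketch; the here-doc values are copied from this log, while the real nvme/functions.sh builds these arrays from 'nvme id-ctrl' output for each controller it enumerates.

#!/usr/bin/env bash
# Minimal sketch, not the real nvme/functions.sh helper:
# 1) read "register: value" pairs into an associative array, and
# 2) check CTRATT bit 19 to see whether the controller advertises FDP.

declare -A nvme3

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue
    reg=${reg//[[:space:]]/}                     # register names are padded in the dump
    val="${val#"${val%%[![:space:]]*}"}"         # strip leading whitespace from the value
    nvme3[$reg]=$val
done <<'EOF'
ctratt : 0x88010
sqes   : 0x66
cqes   : 0x44
subnqn : nqn.2019-08.org.qemu:fdp-subsys3
EOF

ctrl_has_fdp() {
    local -n _ctrl=$1
    local ctratt=${_ctrl[ctratt]}
    (( ctratt & 1 << 19 ))                       # CTRATT bit 19 = FDP supported
}

ctrl_has_fdp nvme3 && echo "nvme3 supports FDP"  # 0x88010 has bit 19 (0x80000) set

The other three controllers report CTRATT 0x8000, so the same test fails for them, which matches the single "echo nvme3" in the trace.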
00:10:33.430 Initialization complete. 00:10:33.430 00:10:33.430 ================================== 00:10:33.430 == FDP tests for Namespace: #01 == 00:10:33.430 ================================== 00:10:33.430 00:10:33.430 Get Feature: FDP: 00:10:33.430 ================= 00:10:33.430 Enabled: Yes 00:10:33.430 FDP configuration Index: 0 00:10:33.430 00:10:33.430 FDP configurations log page 00:10:33.430 =========================== 00:10:33.430 Number of FDP configurations: 1 00:10:33.430 Version: 0 00:10:33.430 Size: 112 00:10:33.430 FDP Configuration Descriptor: 0 00:10:33.430 Descriptor Size: 96 00:10:33.430 Reclaim Group Identifier format: 2 00:10:33.430 FDP Volatile Write Cache: Not Present 00:10:33.430 FDP Configuration: Valid 00:10:33.430 Vendor Specific Size: 0 00:10:33.430 Number of Reclaim Groups: 2 00:10:33.430 Number of Recalim Unit Handles: 8 00:10:33.430 Max Placement Identifiers: 128 00:10:33.430 Number of Namespaces Suppprted: 256 00:10:33.430 Reclaim unit Nominal Size: 6000000 bytes 00:10:33.430 Estimated Reclaim Unit Time Limit: Not Reported 00:10:33.430 RUH Desc #000: RUH Type: Initially Isolated 00:10:33.430 RUH Desc #001: RUH Type: Initially Isolated 00:10:33.430 RUH Desc #002: RUH Type: Initially Isolated 00:10:33.430 RUH Desc #003: RUH Type: Initially Isolated 00:10:33.430 RUH Desc #004: RUH Type: Initially Isolated 00:10:33.430 RUH Desc #005: RUH Type: Initially Isolated 00:10:33.431 RUH Desc #006: RUH Type: Initially Isolated 00:10:33.431 RUH Desc #007: RUH Type: Initially Isolated 00:10:33.431 00:10:33.431 FDP reclaim unit handle usage log page 00:10:33.431 ====================================== 00:10:33.431 Number of Reclaim Unit Handles: 8 00:10:33.431 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:33.431 RUH Usage Desc #001: RUH Attributes: Unused 00:10:33.431 RUH Usage Desc #002: RUH Attributes: Unused 00:10:33.431 RUH Usage Desc #003: RUH Attributes: Unused 00:10:33.431 RUH Usage Desc #004: RUH Attributes: Unused 00:10:33.431 RUH Usage Desc #005: RUH Attributes: Unused 00:10:33.431 RUH Usage Desc #006: RUH Attributes: Unused 00:10:33.431 RUH Usage Desc #007: RUH Attributes: Unused 00:10:33.431 00:10:33.431 FDP statistics log page 00:10:33.431 ======================= 00:10:33.431 Host bytes with metadata written: 768131072 00:10:33.431 Media bytes with metadata written: 768356352 00:10:33.431 Media bytes erased: 0 00:10:33.431 00:10:33.431 FDP Reclaim unit handle status 00:10:33.431 ============================== 00:10:33.431 Number of RUHS descriptors: 2 00:10:33.431 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002374 00:10:33.431 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:33.431 00:10:33.431 FDP write on placement id: 0 success 00:10:33.431 00:10:33.431 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:33.431 00:10:33.431 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:33.431 00:10:33.431 Get Feature: FDP Events for Placement handle: #0 00:10:33.431 ======================== 00:10:33.431 Number of FDP Events: 6 00:10:33.431 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:33.431 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:33.431 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:33.431 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:33.431 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:33.431 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:10:33.431 00:10:33.431 FDP events log page 00:10:33.431 =================== 00:10:33.431 Number of FDP events: 1 00:10:33.431 FDP Event #0: 00:10:33.431 Event Type: RU Not Written to Capacity 00:10:33.431 Placement Identifier: Valid 00:10:33.431 NSID: Valid 00:10:33.431 Location: Valid 00:10:33.431 Placement Identifier: 0 00:10:33.431 Event Timestamp: 9 00:10:33.431 Namespace Identifier: 1 00:10:33.431 Reclaim Group Identifier: 0 00:10:33.431 Reclaim Unit Handle Identifier: 0 00:10:33.431 00:10:33.431 FDP test passed 00:10:33.431 00:10:33.431 real 0m0.290s 00:10:33.431 user 0m0.096s 00:10:33.431 sys 0m0.092s 00:10:33.431 13:06:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.431 13:06:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:33.431 ************************************ 00:10:33.431 END TEST nvme_flexible_data_placement 00:10:33.431 ************************************ 00:10:33.431 00:10:33.431 real 0m7.999s 00:10:33.431 user 0m1.319s 00:10:33.431 sys 0m1.627s 00:10:33.431 13:06:25 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.431 ************************************ 00:10:33.431 END TEST nvme_fdp 00:10:33.431 13:06:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:33.431 ************************************ 00:10:33.431 13:06:25 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:10:33.431 13:06:25 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:33.431 13:06:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.431 13:06:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.431 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:10:33.431 ************************************ 00:10:33.431 START TEST nvme_rpc 00:10:33.431 ************************************ 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:33.431 * Looking for test storage... 
00:10:33.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:33.431 13:06:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.431 13:06:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:33.431 13:06:25 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:33.689 13:06:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:33.689 13:06:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71067 00:10:33.689 13:06:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:33.689 13:06:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:33.689 13:06:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71067 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 71067 ']' 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.689 13:06:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.689 [2024-07-25 13:06:25.793736] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
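Editor's note: before attaching anything, nvme_rpc.sh resolves its target with get_first_nvme_bdf. The trace above collects every transport address that gen_nvme.sh reports, checks that the list is non-empty, and echoes the first entry (0000:00:10.0 on this runner). A minimal sketch of that selection, with the repository path taken from this log and simplified error handling:

#!/usr/bin/env bash
# Sketch of the get_first_nvme_bdf selection traced above (simplified).

rootdir=/home/vagrant/spdk_repo/spdk

bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

printf '%s\n' "${bdfs[@]}"        # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 here
echo "first bdf: ${bdfs[0]}"      # the RPC test drives this controller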
00:10:33.689 [2024-07-25 13:06:25.793990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:10:33.948 [2024-07-25 13:06:25.987696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:34.215 [2024-07-25 13:06:26.215676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.215 [2024-07-25 13:06:26.215677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.783 13:06:26 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.783 13:06:26 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:34.783 13:06:26 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:35.348 Nvme0n1 00:10:35.348 13:06:27 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:35.348 13:06:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:35.606 request: 00:10:35.606 { 00:10:35.606 "bdev_name": "Nvme0n1", 00:10:35.606 "filename": "non_existing_file", 00:10:35.606 "method": "bdev_nvme_apply_firmware", 00:10:35.606 "req_id": 1 00:10:35.606 } 00:10:35.606 Got JSON-RPC error response 00:10:35.606 response: 00:10:35.606 { 00:10:35.606 "code": -32603, 00:10:35.606 "message": "open file failed." 00:10:35.606 } 00:10:35.606 13:06:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:35.606 13:06:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:35.606 13:06:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:35.865 13:06:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:35.865 13:06:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71067 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 71067 ']' 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 71067 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71067 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.865 killing process with pid 71067 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71067' 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@969 -- # kill 71067 00:10:35.865 13:06:27 nvme_rpc -- common/autotest_common.sh@974 -- # wait 71067 00:10:37.764 00:10:37.764 real 0m4.327s 00:10:37.764 user 0m8.177s 00:10:37.764 sys 0m0.604s 00:10:37.764 13:06:29 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.764 13:06:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.764 ************************************ 00:10:37.764 END TEST nvme_rpc 00:10:37.764 ************************************ 00:10:37.764 13:06:29 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:37.764 13:06:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:37.764 13:06:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.764 13:06:29 -- common/autotest_common.sh@10 -- # set +x 00:10:37.764 ************************************ 00:10:37.764 START TEST nvme_rpc_timeouts 00:10:37.764 ************************************ 00:10:37.764 13:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:37.764 * Looking for test storage... 00:10:38.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:38.021 13:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:38.021 13:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71138 00:10:38.021 13:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71138 00:10:38.021 13:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71162 00:10:38.021 13:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:38.021 13:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:38.021 13:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71162 00:10:38.021 13:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 71162 ']' 00:10:38.021 13:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.022 13:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.022 13:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.022 13:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.022 13:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:38.022 [2024-07-25 13:06:30.085957] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:38.022 [2024-07-25 13:06:30.086130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71162 ] 00:10:38.280 [2024-07-25 13:06:30.243285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:38.280 [2024-07-25 13:06:30.436627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.280 [2024-07-25 13:06:30.436628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.214 Checking default timeout settings: 00:10:39.214 13:06:31 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.214 13:06:31 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:39.214 13:06:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:39.214 13:06:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:39.472 Making settings changes with rpc: 00:10:39.472 13:06:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:39.472 13:06:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:39.729 Check default vs. modified settings: 00:10:39.729 13:06:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:39.729 13:06:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:39.988 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:39.988 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:39.988 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71138 00:10:39.988 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:39.988 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:39.988 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71138 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:40.246 Setting action_on_timeout is changed as expected. 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71138 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71138 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:40.246 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:40.247 Setting timeout_us is changed as expected. 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71138 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71138 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:40.247 Setting timeout_admin_us is changed as expected. 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
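Editor's note: all three verifications above follow one extract-and-compare pattern: pull the field out of the config dump saved before and after bdev_nvme_set_options, strip everything but alphanumerics, and require the two values to differ. A condensed sketch of that loop; the file names below are placeholders for the /tmp/settings_default_71138 and /tmp/settings_modified_71138 files this run writes:

#!/usr/bin/env bash
# Condensed sketch of the nvme_rpc_timeouts extract-and-compare loop above.

settings_default=/tmp/settings_default      # config saved before bdev_nvme_set_options
settings_modified=/tmp/settings_modified    # config saved after

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$settings_default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting"  "$settings_modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ $before == "$after" ]]; then
        echo "Setting $setting was not changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done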
00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71138 /tmp/settings_modified_71138 00:10:40.247 13:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71162 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 71162 ']' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 71162 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71162 00:10:40.247 killing process with pid 71162 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71162' 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 71162 00:10:40.247 13:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 71162 00:10:42.776 RPC TIMEOUT SETTING TEST PASSED. 00:10:42.776 13:06:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:42.776 00:10:42.776 real 0m4.459s 00:10:42.776 user 0m8.586s 00:10:42.776 sys 0m0.581s 00:10:42.776 13:06:34 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.776 13:06:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:42.776 ************************************ 00:10:42.776 END TEST nvme_rpc_timeouts 00:10:42.776 ************************************ 00:10:42.776 13:06:34 -- spdk/autotest.sh@247 -- # uname -s 00:10:42.776 13:06:34 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:10:42.776 13:06:34 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:42.776 13:06:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:42.776 13:06:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.776 13:06:34 -- common/autotest_common.sh@10 -- # set +x 00:10:42.776 ************************************ 00:10:42.776 START TEST sw_hotplug 00:10:42.776 ************************************ 00:10:42.776 13:06:34 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:42.776 * Looking for test storage... 
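Editor's note: both RPC suites above tear down through the same killprocess helper: confirm the PID is set and still alive with kill -0, read the command name with ps, then kill and wait. A rough sketch of that shape; the real common/autotest_common.sh helper has extra handling (for example the sudo check visible in the trace) that is omitted here:

#!/usr/bin/env bash
# Rough sketch of the killprocess teardown pattern from the traces above.
# usage: killprocess "$spdk_tgt_pid"

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1              # is it still running?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for spdk_tgt
    echo "killing process with pid $pid"
    kill "$pid"                                          # SIGTERM by default
    wait "$pid" 2>/dev/null                              # reap it when it is our child
}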
00:10:42.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.776 13:06:34 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:42.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.776 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:42.776 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:42.776 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:42.776 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.035 13:06:34 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:43.035 13:06:34 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:43.035 13:06:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:43.035 13:06:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@230 -- # local class 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.035 13:06:34 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:10:43.035 13:06:35 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:10:43.035 13:06:35 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:10:43.036 13:06:35 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:43.036 13:06:35 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:43.036 13:06:35 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:43.036 13:06:35 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:43.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.552 Waiting for block devices as requested 00:10:43.552 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.552 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.817 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.817 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.117 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:49.117 13:06:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:49.117 13:06:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:49.374 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:49.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:49.374 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:49.632 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:49.891 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.891 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:50.149 13:06:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72025 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:50.149 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:50.149 13:06:42 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:50.149 13:06:42 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:50.149 13:06:42 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:50.149 13:06:42 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:50.149 13:06:42 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:50.150 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:50.150 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:50.150 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:50.150 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:50.150 13:06:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:50.408 Initializing NVMe Controllers 00:10:50.408 Attaching to 0000:00:10.0 00:10:50.408 Attaching to 0000:00:11.0 00:10:50.408 Attached to 0000:00:10.0 00:10:50.408 Attached to 0000:00:11.0 00:10:50.408 Initialization complete. Starting I/O... 
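Editor's note: the setup phase of this hotplug test (scripts/common.sh above) finds the NVMe controllers by PCI class code: class 01, subclass 08, prog-if 02. That scan reduces to a single pipeline, reproduced here as a standalone sketch; note that cc deliberately carries literal double quotes so it matches the quoted class field in lspci -mm output:

#!/usr/bin/env bash
# Standalone sketch of the NVMe enumeration pipeline from scripts/common.sh:
# list every PCI function, keep the prog-if 02 lines, match class/subclass
# 0108 (NVM Express), and print the domain:bus:device.function addresses.

lspci -mm -n -D \
    | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'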
00:10:50.408 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:50.408 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:50.408 00:10:51.350 QEMU NVMe Ctrl (12340 ): 1119 I/Os completed (+1119) 00:10:51.350 QEMU NVMe Ctrl (12341 ): 1250 I/Os completed (+1250) 00:10:51.350 00:10:52.288 QEMU NVMe Ctrl (12340 ): 2441 I/Os completed (+1322) 00:10:52.289 QEMU NVMe Ctrl (12341 ): 2696 I/Os completed (+1446) 00:10:52.289 00:10:53.665 QEMU NVMe Ctrl (12340 ): 4175 I/Os completed (+1734) 00:10:53.665 QEMU NVMe Ctrl (12341 ): 4489 I/Os completed (+1793) 00:10:53.665 00:10:54.599 QEMU NVMe Ctrl (12340 ): 5803 I/Os completed (+1628) 00:10:54.599 QEMU NVMe Ctrl (12341 ): 6185 I/Os completed (+1696) 00:10:54.599 00:10:55.535 QEMU NVMe Ctrl (12340 ): 7448 I/Os completed (+1645) 00:10:55.535 QEMU NVMe Ctrl (12341 ): 7972 I/Os completed (+1787) 00:10:55.535 00:10:56.102 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:56.102 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.102 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:56.102 [2024-07-25 13:06:48.206869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:56.102 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:56.102 [2024-07-25 13:06:48.209172] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.209273] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.209315] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.209346] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:56.102 [2024-07-25 13:06:48.212708] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.212817] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.212861] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.212888] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.102 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:56.102 [2024-07-25 13:06:48.236034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:56.102 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:56.102 [2024-07-25 13:06:48.238221] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.238300] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.238351] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.238385] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:56.102 [2024-07-25 13:06:48.241546] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.241622] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.241656] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 [2024-07-25 13:06:48.241684] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.102 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:56.102 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:56.102 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:56.102 EAL: Scan for (pci) bus failed. 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:56.361 Attaching to 0000:00:10.0 00:10:56.361 Attached to 0000:00:10.0 00:10:56.361 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:56.361 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:56.361 13:06:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:56.361 Attaching to 0000:00:11.0 00:10:56.361 Attached to 0000:00:11.0 00:10:57.296 QEMU NVMe Ctrl (12340 ): 1601 I/Os completed (+1601) 00:10:57.297 QEMU NVMe Ctrl (12341 ): 1613 I/Os completed (+1613) 00:10:57.297 00:10:58.673 QEMU NVMe Ctrl (12340 ): 3213 I/Os completed (+1612) 00:10:58.673 QEMU NVMe Ctrl (12341 ): 3457 I/Os completed (+1844) 00:10:58.673 00:10:59.609 QEMU NVMe Ctrl (12340 ): 4921 I/Os completed (+1708) 00:10:59.609 QEMU NVMe Ctrl (12341 ): 5299 I/Os completed (+1842) 00:10:59.609 00:11:00.544 QEMU NVMe Ctrl (12340 ): 6745 I/Os completed (+1824) 00:11:00.544 QEMU NVMe Ctrl (12341 ): 7196 I/Os completed (+1897) 00:11:00.544 00:11:01.479 QEMU NVMe Ctrl (12340 ): 8259 I/Os completed (+1514) 00:11:01.479 QEMU NVMe Ctrl (12341 ): 8908 I/Os completed (+1712) 00:11:01.479 00:11:02.413 QEMU NVMe Ctrl (12340 ): 9963 I/Os completed (+1704) 00:11:02.413 QEMU NVMe Ctrl (12341 ): 10734 I/Os completed (+1826) 00:11:02.413 00:11:03.347 QEMU NVMe Ctrl (12340 ): 11759 I/Os completed (+1796) 00:11:03.347 QEMU 
NVMe Ctrl (12341 ): 12614 I/Os completed (+1880) 00:11:03.347 00:11:04.285 QEMU NVMe Ctrl (12340 ): 13471 I/Os completed (+1712) 00:11:04.285 QEMU NVMe Ctrl (12341 ): 14400 I/Os completed (+1786) 00:11:04.285 00:11:05.660 QEMU NVMe Ctrl (12340 ): 15147 I/Os completed (+1676) 00:11:05.660 QEMU NVMe Ctrl (12341 ): 16224 I/Os completed (+1824) 00:11:05.660 00:11:06.596 QEMU NVMe Ctrl (12340 ): 16815 I/Os completed (+1668) 00:11:06.596 QEMU NVMe Ctrl (12341 ): 18043 I/Os completed (+1819) 00:11:06.596 00:11:07.535 QEMU NVMe Ctrl (12340 ): 18523 I/Os completed (+1708) 00:11:07.535 QEMU NVMe Ctrl (12341 ): 19874 I/Os completed (+1831) 00:11:07.535 00:11:08.468 QEMU NVMe Ctrl (12340 ): 20235 I/Os completed (+1712) 00:11:08.468 QEMU NVMe Ctrl (12341 ): 21688 I/Os completed (+1814) 00:11:08.468 00:11:08.468 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:08.468 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:08.468 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.468 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.468 [2024-07-25 13:07:00.527320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:08.468 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:08.468 [2024-07-25 13:07:00.532510] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.532672] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.532769] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.532861] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:08.468 [2024-07-25 13:07:00.539863] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.540029] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.540142] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.540230] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.468 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.468 [2024-07-25 13:07:00.563089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:08.468 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:08.468 [2024-07-25 13:07:00.565588] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.565673] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.565730] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.565771] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:08.468 [2024-07-25 13:07:00.568574] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.468 [2024-07-25 13:07:00.568639] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.469 [2024-07-25 13:07:00.568686] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.469 [2024-07-25 13:07:00.568724] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.469 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:08.469 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:08.469 EAL: Scan for (pci) bus failed. 00:11:08.469 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:08.726 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.726 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.726 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:08.726 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:08.726 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.726 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.727 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.727 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:08.727 Attaching to 0000:00:10.0 00:11:08.727 Attached to 0000:00:10.0 00:11:08.727 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:08.727 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.727 13:07:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:08.727 Attaching to 0000:00:11.0 00:11:08.727 Attached to 0000:00:11.0 00:11:09.292 QEMU NVMe Ctrl (12340 ): 1220 I/Os completed (+1220) 00:11:09.292 QEMU NVMe Ctrl (12341 ): 1080 I/Os completed (+1080) 00:11:09.292 00:11:10.667 QEMU NVMe Ctrl (12340 ): 2936 I/Os completed (+1716) 00:11:10.667 QEMU NVMe Ctrl (12341 ): 2929 I/Os completed (+1849) 00:11:10.667 00:11:11.601 QEMU NVMe Ctrl (12340 ): 4624 I/Os completed (+1688) 00:11:11.601 QEMU NVMe Ctrl (12341 ): 4702 I/Os completed (+1773) 00:11:11.601 00:11:12.566 QEMU NVMe Ctrl (12340 ): 6327 I/Os completed (+1703) 00:11:12.566 QEMU NVMe Ctrl (12341 ): 6523 I/Os completed (+1821) 00:11:12.566 00:11:13.500 QEMU NVMe Ctrl (12340 ): 7904 I/Os completed (+1577) 00:11:13.500 QEMU NVMe Ctrl (12341 ): 8287 I/Os completed (+1764) 00:11:13.500 00:11:14.434 QEMU NVMe Ctrl (12340 ): 9583 I/Os completed (+1679) 00:11:14.434 QEMU NVMe Ctrl (12341 ): 10119 I/Os completed (+1832) 00:11:14.434 00:11:15.367 QEMU NVMe Ctrl (12340 ): 11299 I/Os completed (+1716) 00:11:15.367 QEMU NVMe Ctrl (12341 ): 12001 I/Os completed (+1882) 00:11:15.367 
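The pattern repeating above is one complete hotplug event from remove_attach_helper: each controller is surprise-removed, SPDK's PCIe transport marks it failed and aborts the outstanding trackers, and the device is then rescanned and rebound so the test app can re-attach and resume I/O. The xtrace only records the values being echoed at sw_hotplug.sh@40, @56 and @58-62, not the sysfs files they are written to, so the following is an assumed reconstruction of that sequence (run as root, one BDF shown; the script loops the same writes over both 0000:00:10.0 and 0000:00:11.0), not the script's literal text:
    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"                          # surprise-remove the function (assumed target of the @40 echo)
    echo 1 > /sys/bus/pci/rescan                                         # bring removed devices back (assumed target of the @56 echo)
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # pick the userspace driver (assumed target of the @59 echo)
    echo "$bdf" > /sys/bus/pci/drivers_probe                             # trigger rebinding (assumed target of the @60/@61 echoes)
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # clear the override again (assumed target of the @62 echo)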
00:11:16.312 QEMU NVMe Ctrl (12340 ): 12855 I/Os completed (+1556) 00:11:16.312 QEMU NVMe Ctrl (12341 ): 13753 I/Os completed (+1752) 00:11:16.312 00:11:17.684 QEMU NVMe Ctrl (12340 ): 14477 I/Os completed (+1622) 00:11:17.684 QEMU NVMe Ctrl (12341 ): 15608 I/Os completed (+1855) 00:11:17.684 00:11:18.251 QEMU NVMe Ctrl (12340 ): 16055 I/Os completed (+1578) 00:11:18.251 QEMU NVMe Ctrl (12341 ): 17349 I/Os completed (+1741) 00:11:18.251 00:11:19.628 QEMU NVMe Ctrl (12340 ): 17768 I/Os completed (+1713) 00:11:19.628 QEMU NVMe Ctrl (12341 ): 19157 I/Os completed (+1808) 00:11:19.628 00:11:20.578 QEMU NVMe Ctrl (12340 ): 19461 I/Os completed (+1693) 00:11:20.578 QEMU NVMe Ctrl (12341 ): 21018 I/Os completed (+1861) 00:11:20.578 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.837 [2024-07-25 13:07:12.892741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:20.837 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:20.837 [2024-07-25 13:07:12.894640] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.894712] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.894743] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.894772] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:20.837 [2024-07-25 13:07:12.897603] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.897667] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.897694] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.897717] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.837 [2024-07-25 13:07:12.920418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:20.837 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:20.837 [2024-07-25 13:07:12.922418] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.922496] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.922532] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.922556] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:20.837 [2024-07-25 13:07:12.925252] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.925324] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.925373] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 [2024-07-25 13:07:12.925398] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:20.837 13:07:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:21.095 Attaching to 0000:00:10.0 00:11:21.095 Attached to 0000:00:10.0 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:21.095 13:07:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:21.095 Attaching to 0000:00:11.0 00:11:21.095 Attached to 0000:00:11.0 00:11:21.095 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:21.095 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:21.095 [2024-07-25 13:07:13.232426] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:33.299 13:07:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:33.299 13:07:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:33.299 13:07:25 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.02 00:11:33.299 13:07:25 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.02 00:11:33.299 13:07:25 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:33.299 13:07:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.02 00:11:33.299 13:07:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.02 2 00:11:33.299 remove_attach_helper took 43.02s to complete (handling 2 nvme drive(s)) 13:07:25 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72025 00:11:39.864 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72025) - No such process 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72025 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=72566 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 72566 00:11:39.864 13:07:31 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 72566 ']' 00:11:39.864 13:07:31 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.864 13:07:31 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.864 13:07:31 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.864 13:07:31 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.864 13:07:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.864 13:07:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:39.864 [2024-07-25 13:07:31.348187] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:39.864 [2024-07-25 13:07:31.348345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72566 ] 00:11:39.864 [2024-07-25 13:07:31.518524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.864 [2024-07-25 13:07:31.716028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:40.430 13:07:32 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:40.430 13:07:32 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:40.430 13:07:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.991 13:07:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.991 13:07:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.991 13:07:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.991 [2024-07-25 13:07:38.530755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:46.991 [2024-07-25 13:07:38.533675] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.991 [2024-07-25 13:07:38.533739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.991 [2024-07-25 13:07:38.533782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.991 [2024-07-25 13:07:38.533812] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.991 [2024-07-25 13:07:38.533833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.991 [2024-07-25 13:07:38.533850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.991 [2024-07-25 13:07:38.533868] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.991 [2024-07-25 13:07:38.533882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.991 [2024-07-25 13:07:38.533898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.991 [2024-07-25 13:07:38.533913] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.991 [2024-07-25 13:07:38.533931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.991 [2024-07-25 13:07:38.533946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.991 13:07:38 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:46.991 13:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.991 [2024-07-25 13:07:38.930769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:46.991 [2024-07-25 13:07:38.933792] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.991 [2024-07-25 13:07:38.933870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.991 [2024-07-25 13:07:38.933894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.991 [2024-07-25 13:07:38.933925] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.991 [2024-07-25 13:07:38.933942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.991 [2024-07-25 13:07:38.933959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.991 [2024-07-25 13:07:38.933974] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.991 [2024-07-25 13:07:38.933991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.992 [2024-07-25 13:07:38.934005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.992 [2024-07-25 13:07:38.934022] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.992 [2024-07-25 13:07:38.934036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.992 [2024-07-25 13:07:38.934052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.992 13:07:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.992 13:07:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.992 13:07:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:46.992 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:47.249 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.249 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.249 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:47.250 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:47.250 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.250 
13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.250 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.250 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:47.250 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:47.250 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.250 13:07:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.460 13:07:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.460 13:07:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.460 13:07:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.460 [2024-07-25 13:07:51.530953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
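In this phase the helper runs with use_bdev=true against the spdk_tgt started earlier, so after detaching the controllers it polls the target over RPC until their PCI addresses drop out of bdev_get_bdevs; that is what the sw_hotplug.sh@50-51 lines above show. A sketch of that loop, assembled from the commands visible in the @12-13 xtrace (rpc_cmd is assumed to resolve to scripts/rpc.py against /var/tmp/spdk.sock, run from the SPDK repo root):
    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do                       # e.g. the '(( 2 > 0 ))' checks in the trace
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))                               # re-read until no nvme bdev is left
    done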
00:11:59.460 [2024-07-25 13:07:51.534076] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.460 [2024-07-25 13:07:51.534143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.460 [2024-07-25 13:07:51.534171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.460 [2024-07-25 13:07:51.534206] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.460 [2024-07-25 13:07:51.534228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.460 [2024-07-25 13:07:51.534243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.460 [2024-07-25 13:07:51.534261] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.460 [2024-07-25 13:07:51.534276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.460 [2024-07-25 13:07:51.534292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.460 [2024-07-25 13:07:51.534307] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.460 [2024-07-25 13:07:51.534323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.460 [2024-07-25 13:07:51.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.460 13:07:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.460 13:07:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.460 13:07:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:59.460 13:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.026 [2024-07-25 13:07:51.930978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
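The same bdev_get_bdevs output can also be probed for a single controller when reproducing this wait by hand; a hypothetical one-liner (not part of the script) using the same jq filter keyed on one BDF:
    scripts/rpc.py bdev_get_bdevs \
        | jq -e --arg bdf 0000:00:11.0 'any(.[].driver_specific.nvme[]; .pci_address == $bdf)'
jq -e makes the exit status follow the boolean, so the command succeeds only while that controller is still registered with the target.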
00:12:00.026 [2024-07-25 13:07:51.933987] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.026 [2024-07-25 13:07:51.934051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.026 [2024-07-25 13:07:51.934075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.026 [2024-07-25 13:07:51.934124] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.026 [2024-07-25 13:07:51.934144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.026 [2024-07-25 13:07:51.934197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.026 [2024-07-25 13:07:51.934216] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.026 [2024-07-25 13:07:51.934233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.026 [2024-07-25 13:07:51.934248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.026 [2024-07-25 13:07:51.934265] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.026 [2024-07-25 13:07:51.934279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.026 [2024-07-25 13:07:51.934295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.026 13:07:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.026 13:07:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.026 13:07:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:00.026 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:00.282 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:00.539 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.539 13:07:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.740 13:08:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.740 13:08:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.740 13:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.740 13:08:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.740 13:08:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.740 13:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:12.740 13:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:12.740 [2024-07-25 13:08:04.631168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:12.740 [2024-07-25 13:08:04.634724] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.740 [2024-07-25 13:08:04.634778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.740 [2024-07-25 13:08:04.634810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.740 [2024-07-25 13:08:04.634840] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.740 [2024-07-25 13:08:04.634863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.740 [2024-07-25 13:08:04.634879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.740 [2024-07-25 13:08:04.634905] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.740 [2024-07-25 13:08:04.634920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.740 [2024-07-25 13:08:04.634940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.740 [2024-07-25 13:08:04.634956] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.740 [2024-07-25 13:08:04.634976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.740 [2024-07-25 13:08:04.634991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.999 [2024-07-25 13:08:05.031185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:12.999 [2024-07-25 13:08:05.034633] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.999 [2024-07-25 13:08:05.034692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.999 [2024-07-25 13:08:05.034716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.999 [2024-07-25 13:08:05.034746] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.999 [2024-07-25 13:08:05.034762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.999 [2024-07-25 13:08:05.034779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.999 [2024-07-25 13:08:05.034794] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.999 [2024-07-25 13:08:05.034811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.999 [2024-07-25 13:08:05.034825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.999 [2024-07-25 13:08:05.034845] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.999 [2024-07-25 13:08:05.034859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.999 [2024-07-25 13:08:05.034875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.999 13:08:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.999 13:08:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.999 13:08:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:12.999 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.282 13:08:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.10 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.10 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.10 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.10 2 00:12:25.508 remove_attach_helper took 45.10s to complete (handling 2 nvme drive(s)) 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:25.508 13:08:17 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:25.508 13:08:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:25.508 13:08:17 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.067 13:08:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.067 13:08:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.067 13:08:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.067 [2024-07-25 13:08:23.658735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:32.067 [2024-07-25 13:08:23.660699] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.067 [2024-07-25 13:08:23.660751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.067 [2024-07-25 13:08:23.660777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.067 [2024-07-25 13:08:23.660807] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.067 [2024-07-25 13:08:23.660825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.067 [2024-07-25 13:08:23.660840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.067 [2024-07-25 13:08:23.660858] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.067 [2024-07-25 13:08:23.660872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.067 [2024-07-25 13:08:23.660888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.067 [2024-07-25 13:08:23.660903] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.067 [2024-07-25 13:08:23.660919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.067 [2024-07-25 13:08:23.660933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:32.067 13:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:32.067 [2024-07-25 13:08:24.158759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
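Before rerunning the three-event helper (sw_hotplug.sh@122), the script toggles the target's NVMe hotplug monitor off and back on over RPC (the @119-120 rpc_cmd calls above); the 45.10 s reported just before that is simply the wall-clock time of the previous pass, captured by the TIMEFORMAT=%2R wrapper in timing_cmd. Issued directly instead of through the rpc_cmd wrapper, the toggle would look roughly like this (rpc.py path assumed):
    scripts/rpc.py bdev_nvme_set_hotplug -d   # disable the hotplug poller
    scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable it before the next three events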
00:12:32.067 [2024-07-25 13:08:24.161487] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.067 [2024-07-25 13:08:24.161547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.067 [2024-07-25 13:08:24.161570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.067 [2024-07-25 13:08:24.161601] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.067 [2024-07-25 13:08:24.161617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.067 [2024-07-25 13:08:24.161634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.067 [2024-07-25 13:08:24.161650] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.067 [2024-07-25 13:08:24.161669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.067 [2024-07-25 13:08:24.161683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.068 [2024-07-25 13:08:24.161700] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.068 [2024-07-25 13:08:24.161714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.068 [2024-07-25 13:08:24.161731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.068 13:08:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.068 13:08:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.068 13:08:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:32.068 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:32.326 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:32.583 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:32.583 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:32.583 13:08:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.805 13:08:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.805 13:08:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.805 13:08:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.805 [2024-07-25 13:08:36.658937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:44.805 [2024-07-25 13:08:36.661260] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.805 [2024-07-25 13:08:36.661437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.805 [2024-07-25 13:08:36.661685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.805 [2024-07-25 13:08:36.661864] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.805 [2024-07-25 13:08:36.661899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.805 [2024-07-25 13:08:36.661917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.805 [2024-07-25 13:08:36.661936] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.805 [2024-07-25 13:08:36.661951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.805 [2024-07-25 13:08:36.661968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.805 [2024-07-25 13:08:36.661983] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.805 [2024-07-25 13:08:36.661999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.805 [2024-07-25 13:08:36.662014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.805 13:08:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.805 13:08:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.805 13:08:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:44.805 13:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:45.064 [2024-07-25 13:08:37.058957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:45.064 [2024-07-25 13:08:37.061893] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.064 [2024-07-25 13:08:37.061954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.064 [2024-07-25 13:08:37.061979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.064 [2024-07-25 13:08:37.062008] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.064 [2024-07-25 13:08:37.062024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.064 [2024-07-25 13:08:37.062044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.064 [2024-07-25 13:08:37.062061] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.064 [2024-07-25 13:08:37.062078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.064 [2024-07-25 13:08:37.062092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.064 [2024-07-25 13:08:37.062133] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.064 [2024-07-25 13:08:37.062152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.064 [2024-07-25 13:08:37.062169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.064 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:45.064 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:45.064 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:45.064 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:45.064 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:45.064 13:08:37 sw_hotplug -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:45.064 13:08:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.064 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:45.064 13:08:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:45.323 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:45.582 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:45.582 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:45.582 13:08:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:57.790 13:08:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.790 13:08:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.790 13:08:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:57.790 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:57.790 [2024-07-25 13:08:49.659187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:57.790 [2024-07-25 13:08:49.661717] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.790 [2024-07-25 13:08:49.661915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.790 [2024-07-25 13:08:49.662162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.790 [2024-07-25 13:08:49.662374] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.790 [2024-07-25 13:08:49.662525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.790 [2024-07-25 13:08:49.662727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.790 [2024-07-25 13:08:49.662906] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.790 [2024-07-25 13:08:49.663079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.791 [2024-07-25 13:08:49.663271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.791 [2024-07-25 13:08:49.663468] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.791 [2024-07-25 13:08:49.663647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.791 [2024-07-25 13:08:49.663807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:57.791 13:08:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.791 13:08:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.791 13:08:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:57.791 13:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:58.049 [2024-07-25 13:08:50.059165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:58.049 [2024-07-25 13:08:50.062349] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.049 [2024-07-25 13:08:50.062592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.049 [2024-07-25 13:08:50.062852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.049 [2024-07-25 13:08:50.063040] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.049 [2024-07-25 13:08:50.063239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.049 [2024-07-25 13:08:50.063326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.049 [2024-07-25 13:08:50.063475] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.049 [2024-07-25 13:08:50.063533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.049 [2024-07-25 13:08:50.063721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.049 [2024-07-25 13:08:50.063980] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.049 [2024-07-25 13:08:50.064037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.049 [2024-07-25 13:08:50.064261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.049 [2024-07-25 13:08:50.064480] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.049 [2024-07-25 13:08:50.064653] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.049 [2024-07-25 13:08:50.064712] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.049 [2024-07-25 13:08:50.064831] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.049 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:58.049 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:58.049 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:58.049 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:58.049 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:58.049 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:58.049 13:08:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.049 13:08:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:58.049 13:08:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.308 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:58.308 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:58.308 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:58.308 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:58.308 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:10.0 00:12:58.565 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:58.565 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:58.566 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:58.566 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:58.566 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:58.566 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:58.566 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:58.566 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.13 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.13 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.13 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.13 2 00:13:10.764 remove_attach_helper took 45.13s to complete (handling 2 nvme drive(s)) 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:10.764 13:09:02 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 72566 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 72566 ']' 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 72566 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72566 00:13:10.764 killing process with pid 72566 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72566' 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@969 -- # kill 72566 00:13:10.764 13:09:02 sw_hotplug -- common/autotest_common.sh@974 -- # wait 72566 00:13:12.727 13:09:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:13.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:13.551 
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:13.551 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:13.810 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:13.810 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:13.810 00:13:13.810 real 2m31.471s 00:13:13.810 user 1m51.350s 00:13:13.810 sys 0m19.884s 00:13:13.810 13:09:05 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.810 ************************************ 00:13:13.810 END TEST sw_hotplug 00:13:13.810 ************************************ 00:13:13.810 13:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.810 13:09:05 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:13:13.810 13:09:05 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:13.810 13:09:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:13.810 13:09:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.810 13:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:13.810 ************************************ 00:13:13.810 START TEST nvme_xnvme 00:13:13.810 ************************************ 00:13:13.810 13:09:05 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:13.810 * Looking for test storage... 00:13:13.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:13.810 13:09:05 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.810 13:09:05 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.810 13:09:05 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.810 13:09:05 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.068 13:09:05 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.068 13:09:05 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.069 13:09:05 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.069 13:09:05 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:14.069 13:09:05 nvme_xnvme -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.069 13:09:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:14.069 13:09:06 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:14.069 13:09:06 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.069 13:09:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 ************************************ 00:13:14.069 START TEST xnvme_to_malloc_dd_copy 00:13:14.069 ************************************ 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- 
xnvme/xnvme.sh@42 -- # gen_conf 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:14.069 13:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 { 00:13:14.069 "subsystems": [ 00:13:14.069 { 00:13:14.069 "subsystem": "bdev", 00:13:14.069 "config": [ 00:13:14.069 { 00:13:14.069 "params": { 00:13:14.069 "block_size": 512, 00:13:14.069 "num_blocks": 2097152, 00:13:14.069 "name": "malloc0" 00:13:14.069 }, 00:13:14.069 "method": "bdev_malloc_create" 00:13:14.069 }, 00:13:14.069 { 00:13:14.069 "params": { 00:13:14.069 "io_mechanism": "libaio", 00:13:14.069 "filename": "/dev/nullb0", 00:13:14.069 "name": "null0" 00:13:14.069 }, 00:13:14.069 "method": "bdev_xnvme_create" 00:13:14.069 }, 00:13:14.069 { 00:13:14.069 "method": "bdev_wait_for_examine" 00:13:14.069 } 00:13:14.069 ] 00:13:14.069 } 00:13:14.069 ] 00:13:14.069 } 00:13:14.069 [2024-07-25 13:09:06.132302] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:14.069 [2024-07-25 13:09:06.132698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73916 ] 00:13:14.328 [2024-07-25 13:09:06.308417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.586 [2024-07-25 13:09:06.534344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.405  Copying: 155/1024 [MB] (155 MBps) Copying: 319/1024 [MB] (164 MBps) Copying: 484/1024 [MB] (164 MBps) Copying: 646/1024 [MB] (162 MBps) Copying: 809/1024 [MB] (162 MBps) Copying: 974/1024 [MB] (164 MBps) Copying: 1024/1024 [MB] (average 162 MBps) 00:13:25.405 00:13:25.663 13:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:25.663 13:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:25.663 13:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:25.663 13:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:25.663 { 00:13:25.663 "subsystems": [ 00:13:25.663 { 00:13:25.663 "subsystem": "bdev", 00:13:25.663 "config": [ 00:13:25.663 { 00:13:25.663 "params": { 00:13:25.663 "block_size": 512, 00:13:25.663 "num_blocks": 2097152, 00:13:25.663 "name": "malloc0" 00:13:25.663 }, 00:13:25.663 "method": "bdev_malloc_create" 00:13:25.663 }, 00:13:25.663 { 00:13:25.663 "params": { 00:13:25.663 "io_mechanism": "libaio", 00:13:25.663 "filename": "/dev/nullb0", 00:13:25.663 "name": "null0" 00:13:25.663 }, 00:13:25.663 "method": "bdev_xnvme_create" 00:13:25.663 }, 00:13:25.663 { 00:13:25.663 "method": "bdev_wait_for_examine" 00:13:25.663 } 00:13:25.663 ] 00:13:25.663 } 00:13:25.663 ] 00:13:25.663 } 00:13:25.663 [2024-07-25 13:09:17.721186] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:25.663 [2024-07-25 13:09:17.721354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74049 ] 00:13:25.921 [2024-07-25 13:09:17.894826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.180 [2024-07-25 13:09:18.127185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.646  Copying: 169/1024 [MB] (169 MBps) Copying: 339/1024 [MB] (169 MBps) Copying: 509/1024 [MB] (170 MBps) Copying: 677/1024 [MB] (167 MBps) Copying: 846/1024 [MB] (169 MBps) Copying: 1002/1024 [MB] (156 MBps) Copying: 1024/1024 [MB] (average 166 MBps) 00:13:37.646 00:13:37.646 13:09:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:37.646 13:09:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:37.646 13:09:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:37.646 13:09:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:37.646 13:09:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:37.646 13:09:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:37.646 { 00:13:37.646 "subsystems": [ 00:13:37.646 { 00:13:37.646 "subsystem": "bdev", 00:13:37.646 "config": [ 00:13:37.646 { 00:13:37.646 "params": { 00:13:37.646 "block_size": 512, 00:13:37.646 "num_blocks": 2097152, 00:13:37.646 "name": "malloc0" 00:13:37.646 }, 00:13:37.646 "method": "bdev_malloc_create" 00:13:37.646 }, 00:13:37.646 { 00:13:37.646 "params": { 00:13:37.646 "io_mechanism": "io_uring", 00:13:37.646 "filename": "/dev/nullb0", 00:13:37.646 "name": "null0" 00:13:37.646 }, 00:13:37.647 "method": "bdev_xnvme_create" 00:13:37.647 }, 00:13:37.647 { 00:13:37.647 "method": "bdev_wait_for_examine" 00:13:37.647 } 00:13:37.647 ] 00:13:37.647 } 00:13:37.647 ] 00:13:37.647 } 00:13:37.647 [2024-07-25 13:09:29.170470] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:37.647 [2024-07-25 13:09:29.170681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74175 ] 00:13:37.647 [2024-07-25 13:09:29.337862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.647 [2024-07-25 13:09:29.564393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.850  Copying: 179/1024 [MB] (179 MBps) Copying: 362/1024 [MB] (182 MBps) Copying: 544/1024 [MB] (181 MBps) Copying: 723/1024 [MB] (179 MBps) Copying: 904/1024 [MB] (181 MBps) Copying: 1024/1024 [MB] (average 181 MBps) 00:13:47.850 00:13:47.850 13:09:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:47.850 13:09:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:47.850 13:09:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:47.850 13:09:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:47.850 { 00:13:47.850 "subsystems": [ 00:13:47.850 { 00:13:47.850 "subsystem": "bdev", 00:13:47.850 "config": [ 00:13:47.850 { 00:13:47.850 "params": { 00:13:47.850 "block_size": 512, 00:13:47.850 "num_blocks": 2097152, 00:13:47.850 "name": "malloc0" 00:13:47.850 }, 00:13:47.850 "method": "bdev_malloc_create" 00:13:47.850 }, 00:13:47.850 { 00:13:47.850 "params": { 00:13:47.850 "io_mechanism": "io_uring", 00:13:47.850 "filename": "/dev/nullb0", 00:13:47.850 "name": "null0" 00:13:47.850 }, 00:13:47.850 "method": "bdev_xnvme_create" 00:13:47.850 }, 00:13:47.850 { 00:13:47.851 "method": "bdev_wait_for_examine" 00:13:47.851 } 00:13:47.851 ] 00:13:47.851 } 00:13:47.851 ] 00:13:47.851 } 00:13:48.125 [2024-07-25 13:09:40.075689] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:48.125 [2024-07-25 13:09:40.075934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74297 ] 00:13:48.125 [2024-07-25 13:09:40.255707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.399 [2024-07-25 13:09:40.490816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.455  Copying: 177/1024 [MB] (177 MBps) Copying: 359/1024 [MB] (182 MBps) Copying: 536/1024 [MB] (177 MBps) Copying: 715/1024 [MB] (178 MBps) Copying: 898/1024 [MB] (182 MBps) Copying: 1024/1024 [MB] (average 180 MBps) 00:13:59.455 00:13:59.455 13:09:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:59.455 13:09:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:59.455 ************************************ 00:13:59.455 END TEST xnvme_to_malloc_dd_copy 00:13:59.455 ************************************ 00:13:59.455 00:13:59.455 real 0m44.940s 00:13:59.455 user 0m39.438s 00:13:59.455 sys 0m4.905s 00:13:59.455 13:09:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.455 13:09:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:59.455 13:09:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:59.455 13:09:50 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:59.455 13:09:50 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.455 13:09:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.455 ************************************ 00:13:59.455 START TEST xnvme_bdevperf 00:13:59.455 ************************************ 00:13:59.455 13:09:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:59.455 13:09:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:59.455 13:09:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.455 { 00:13:59.455 "subsystems": [ 00:13:59.455 { 00:13:59.455 "subsystem": "bdev", 00:13:59.455 "config": [ 00:13:59.455 { 00:13:59.455 "params": { 00:13:59.455 "io_mechanism": "libaio", 00:13:59.455 "filename": "/dev/nullb0", 00:13:59.455 "name": "null0" 00:13:59.455 }, 00:13:59.455 "method": "bdev_xnvme_create" 00:13:59.455 }, 00:13:59.455 { 00:13:59.455 "method": "bdev_wait_for_examine" 00:13:59.455 } 00:13:59.455 ] 00:13:59.455 } 00:13:59.455 ] 00:13:59.455 } 00:13:59.455 [2024-07-25 13:09:51.104463] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:59.455 [2024-07-25 13:09:51.104643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74440 ] 00:13:59.455 [2024-07-25 13:09:51.268789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.455 [2024-07-25 13:09:51.455418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.712 Running I/O for 5 seconds... 00:14:05.038 00:14:05.038 Latency(us) 00:14:05.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.038 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:05.038 null0 : 5.00 113563.47 443.61 0.00 0.00 560.03 175.94 949.53 00:14:05.038 =================================================================================================================== 00:14:05.038 Total : 113563.47 443.61 0.00 0.00 560.03 175.94 949.53 00:14:05.972 13:09:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:05.972 13:09:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:05.972 13:09:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:05.972 13:09:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:05.972 13:09:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:05.972 13:09:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.972 { 00:14:05.972 "subsystems": [ 00:14:05.972 { 00:14:05.972 "subsystem": "bdev", 00:14:05.972 "config": [ 00:14:05.972 { 00:14:05.972 "params": { 00:14:05.972 "io_mechanism": "io_uring", 00:14:05.972 "filename": "/dev/nullb0", 00:14:05.972 "name": "null0" 00:14:05.972 }, 00:14:05.972 "method": "bdev_xnvme_create" 00:14:05.972 }, 00:14:05.972 { 00:14:05.972 "method": "bdev_wait_for_examine" 00:14:05.972 } 00:14:05.972 ] 00:14:05.972 } 00:14:05.972 ] 00:14:05.972 } 00:14:05.972 [2024-07-25 13:09:57.987910] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:05.972 [2024-07-25 13:09:57.988339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74520 ] 00:14:06.230 [2024-07-25 13:09:58.162901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.230 [2024-07-25 13:09:58.349175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.487 Running I/O for 5 seconds... 00:14:11.775 00:14:11.775 Latency(us) 00:14:11.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.775 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:11.775 null0 : 5.00 150372.27 587.39 0.00 0.00 422.22 262.52 565.99 00:14:11.776 =================================================================================================================== 00:14:11.776 Total : 150372.27 587.39 0.00 0.00 422.22 262.52 565.99 00:14:12.710 13:10:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:12.710 13:10:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:12.710 ************************************ 00:14:12.710 END TEST xnvme_bdevperf 00:14:12.710 ************************************ 00:14:12.710 00:14:12.710 real 0m13.815s 00:14:12.710 user 0m10.801s 00:14:12.710 sys 0m2.768s 00:14:12.710 13:10:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.710 13:10:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:12.710 ************************************ 00:14:12.710 END TEST nvme_xnvme 00:14:12.710 ************************************ 00:14:12.710 00:14:12.710 real 0m58.939s 00:14:12.710 user 0m50.296s 00:14:12.710 sys 0m7.790s 00:14:12.710 13:10:04 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.711 13:10:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.711 13:10:04 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:12.711 13:10:04 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:12.711 13:10:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.711 13:10:04 -- common/autotest_common.sh@10 -- # set +x 00:14:12.969 ************************************ 00:14:12.969 START TEST blockdev_xnvme 00:14:12.969 ************************************ 00:14:12.969 13:10:04 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:12.969 * Looking for test storage... 
00:14:12.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74660 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:12.969 13:10:04 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74660 00:14:12.969 13:10:04 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 74660 ']' 00:14:12.969 13:10:04 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.969 13:10:04 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.969 13:10:04 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.969 13:10:04 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.969 13:10:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.969 [2024-07-25 13:10:05.120915] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:12.969 [2024-07-25 13:10:05.121346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74660 ] 00:14:13.227 [2024-07-25 13:10:05.286697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.485 [2024-07-25 13:10:05.477394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.052 13:10:06 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.052 13:10:06 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:14.052 13:10:06 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:14.052 13:10:06 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:14.052 13:10:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:14.052 13:10:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:14.052 13:10:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:14.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:14.618 Waiting for block devices as requested 00:14:14.618 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.618 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.876 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.876 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.145 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:20.145 13:10:12 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.145 13:10:12 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:20.145 nvme0n1 00:14:20.145 nvme1n1 00:14:20.145 nvme2n1 00:14:20.145 nvme2n2 00:14:20.145 nvme2n3 00:14:20.145 nvme3n1 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 
13:10:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 13:10:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:20.145 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:20.146 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a59d0b0b-182e-43d9-a90f-6f4f701137b5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a59d0b0b-182e-43d9-a90f-6f4f701137b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "96f58e58-1724-40e9-8dd4-e1d5406cbd7a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "96f58e58-1724-40e9-8dd4-e1d5406cbd7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bb301f30-7232-4760-9540-78d51f16d6d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb301f30-7232-4760-9540-78d51f16d6d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f86a8fb4-3692-48de-b816-8feb19ea1eb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f86a8fb4-3692-48de-b816-8feb19ea1eb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "2b737d3c-57f7-45c6-9951-df5568ec4d88"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b737d3c-57f7-45c6-9951-df5568ec4d88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "72dfa4e0-e75b-487d-b48a-52fa3b980e4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "72dfa4e0-e75b-487d-b48a-52fa3b980e4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:20.404 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:20.404 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:20.404 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:20.404 13:10:12 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 74660 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 74660 ']' 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 74660 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74660 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.404 killing process with pid 74660 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 74660' 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 74660 00:14:20.404 13:10:12 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 74660 00:14:22.936 13:10:14 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:22.936 13:10:14 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:22.936 13:10:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:22.936 13:10:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.936 13:10:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:22.936 ************************************ 00:14:22.936 START TEST bdev_hello_world 00:14:22.936 ************************************ 00:14:22.936 13:10:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:22.936 [2024-07-25 13:10:14.613441] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:22.936 [2024-07-25 13:10:14.613606] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75024 ] 00:14:22.936 [2024-07-25 13:10:14.817363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.936 [2024-07-25 13:10:15.066203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.515 [2024-07-25 13:10:15.456349] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:23.515 [2024-07-25 13:10:15.456419] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:23.515 [2024-07-25 13:10:15.456451] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:23.515 [2024-07-25 13:10:15.458711] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:23.515 [2024-07-25 13:10:15.458966] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:23.515 [2024-07-25 13:10:15.458995] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:23.515 [2024-07-25 13:10:15.459217] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
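The hello_world test above drives SPDK's hello_bdev example against the first unclaimed xNVMe bdev (nvme0n1). A minimal stand-alone invocation of the same example, assuming the repo layout and bdev.json shown in this log, would be:

    # point the example at the generated bdev config and a bdev name
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1
    # on success the app logs: Read string from bdev : Hello World!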
00:14:23.515 00:14:23.515 [2024-07-25 13:10:15.459248] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:24.450 00:14:24.450 real 0m2.063s 00:14:24.450 user 0m1.750s 00:14:24.450 sys 0m0.195s 00:14:24.450 ************************************ 00:14:24.450 END TEST bdev_hello_world 00:14:24.450 ************************************ 00:14:24.450 13:10:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:24.450 13:10:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:24.450 13:10:16 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:24.450 13:10:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:24.450 13:10:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:24.450 13:10:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.708 ************************************ 00:14:24.708 START TEST bdev_bounds 00:14:24.709 ************************************ 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75075 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:24.709 Process bdevio pid: 75075 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75075' 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75075 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75075 ']' 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.709 13:10:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:24.709 [2024-07-25 13:10:16.739335] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:24.709 [2024-07-25 13:10:16.739540] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75075 ] 00:14:24.967 [2024-07-25 13:10:16.917683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.225 [2024-07-25 13:10:17.182935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.225 [2024-07-25 13:10:17.183009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.225 [2024-07-25 13:10:17.183009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.795 13:10:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.795 13:10:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:25.795 13:10:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:25.795 I/O targets: 00:14:25.795 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:25.795 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:25.795 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:25.795 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:25.795 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:25.795 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:25.795 00:14:25.795 00:14:25.795 CUnit - A unit testing framework for C - Version 2.1-3 00:14:25.795 http://cunit.sourceforge.net/ 00:14:25.795 00:14:25.795 00:14:25.795 Suite: bdevio tests on: nvme3n1 00:14:25.795 Test: blockdev write read block ...passed 00:14:25.795 Test: blockdev write zeroes read block ...passed 00:14:25.795 Test: blockdev write zeroes read no split ...passed 00:14:25.795 Test: blockdev write zeroes read split ...passed 00:14:25.795 Test: blockdev write zeroes read split partial ...passed 00:14:25.795 Test: blockdev reset ...passed 00:14:25.795 Test: blockdev write read 8 blocks ...passed 00:14:25.795 Test: blockdev write read size > 128k ...passed 00:14:25.795 Test: blockdev write read invalid size ...passed 00:14:25.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:25.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:25.795 Test: blockdev write read max offset ...passed 00:14:25.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:25.795 Test: blockdev writev readv 8 blocks ...passed 00:14:25.795 Test: blockdev writev readv 30 x 1block ...passed 00:14:25.795 Test: blockdev writev readv block ...passed 00:14:25.795 Test: blockdev writev readv size > 128k ...passed 00:14:25.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:25.795 Test: blockdev comparev and writev ...passed 00:14:25.795 Test: blockdev nvme passthru rw ...passed 00:14:25.795 Test: blockdev nvme passthru vendor specific ...passed 00:14:25.795 Test: blockdev nvme admin passthru ...passed 00:14:25.795 Test: blockdev copy ...passed 00:14:25.795 Suite: bdevio tests on: nvme2n3 00:14:25.795 Test: blockdev write read block ...passed 00:14:25.795 Test: blockdev write zeroes read block ...passed 00:14:25.795 Test: blockdev write zeroes read no split ...passed 00:14:25.795 Test: blockdev write zeroes read split ...passed 00:14:25.795 Test: blockdev write zeroes read split partial ...passed 00:14:25.795 Test: blockdev reset ...passed 
00:14:25.795 Test: blockdev write read 8 blocks ...passed 00:14:25.795 Test: blockdev write read size > 128k ...passed 00:14:25.795 Test: blockdev write read invalid size ...passed 00:14:25.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:25.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:25.795 Test: blockdev write read max offset ...passed 00:14:25.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:25.795 Test: blockdev writev readv 8 blocks ...passed 00:14:25.795 Test: blockdev writev readv 30 x 1block ...passed 00:14:25.795 Test: blockdev writev readv block ...passed 00:14:26.055 Test: blockdev writev readv size > 128k ...passed 00:14:26.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:26.055 Test: blockdev comparev and writev ...passed 00:14:26.055 Test: blockdev nvme passthru rw ...passed 00:14:26.055 Test: blockdev nvme passthru vendor specific ...passed 00:14:26.055 Test: blockdev nvme admin passthru ...passed 00:14:26.055 Test: blockdev copy ...passed 00:14:26.055 Suite: bdevio tests on: nvme2n2 00:14:26.055 Test: blockdev write read block ...passed 00:14:26.055 Test: blockdev write zeroes read block ...passed 00:14:26.055 Test: blockdev write zeroes read no split ...passed 00:14:26.055 Test: blockdev write zeroes read split ...passed 00:14:26.055 Test: blockdev write zeroes read split partial ...passed 00:14:26.055 Test: blockdev reset ...passed 00:14:26.055 Test: blockdev write read 8 blocks ...passed 00:14:26.055 Test: blockdev write read size > 128k ...passed 00:14:26.055 Test: blockdev write read invalid size ...passed 00:14:26.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:26.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:26.055 Test: blockdev write read max offset ...passed 00:14:26.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:26.055 Test: blockdev writev readv 8 blocks ...passed 00:14:26.055 Test: blockdev writev readv 30 x 1block ...passed 00:14:26.055 Test: blockdev writev readv block ...passed 00:14:26.055 Test: blockdev writev readv size > 128k ...passed 00:14:26.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:26.055 Test: blockdev comparev and writev ...passed 00:14:26.055 Test: blockdev nvme passthru rw ...passed 00:14:26.055 Test: blockdev nvme passthru vendor specific ...passed 00:14:26.055 Test: blockdev nvme admin passthru ...passed 00:14:26.055 Test: blockdev copy ...passed 00:14:26.055 Suite: bdevio tests on: nvme2n1 00:14:26.055 Test: blockdev write read block ...passed 00:14:26.055 Test: blockdev write zeroes read block ...passed 00:14:26.055 Test: blockdev write zeroes read no split ...passed 00:14:26.055 Test: blockdev write zeroes read split ...passed 00:14:26.055 Test: blockdev write zeroes read split partial ...passed 00:14:26.055 Test: blockdev reset ...passed 00:14:26.055 Test: blockdev write read 8 blocks ...passed 00:14:26.055 Test: blockdev write read size > 128k ...passed 00:14:26.055 Test: blockdev write read invalid size ...passed 00:14:26.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:26.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:26.055 Test: blockdev write read max offset ...passed 00:14:26.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:26.055 Test: blockdev writev readv 8 blocks 
...passed 00:14:26.055 Test: blockdev writev readv 30 x 1block ...passed 00:14:26.055 Test: blockdev writev readv block ...passed 00:14:26.055 Test: blockdev writev readv size > 128k ...passed 00:14:26.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:26.055 Test: blockdev comparev and writev ...passed 00:14:26.055 Test: blockdev nvme passthru rw ...passed 00:14:26.055 Test: blockdev nvme passthru vendor specific ...passed 00:14:26.055 Test: blockdev nvme admin passthru ...passed 00:14:26.055 Test: blockdev copy ...passed 00:14:26.055 Suite: bdevio tests on: nvme1n1 00:14:26.055 Test: blockdev write read block ...passed 00:14:26.055 Test: blockdev write zeroes read block ...passed 00:14:26.055 Test: blockdev write zeroes read no split ...passed 00:14:26.055 Test: blockdev write zeroes read split ...passed 00:14:26.055 Test: blockdev write zeroes read split partial ...passed 00:14:26.055 Test: blockdev reset ...passed 00:14:26.055 Test: blockdev write read 8 blocks ...passed 00:14:26.055 Test: blockdev write read size > 128k ...passed 00:14:26.055 Test: blockdev write read invalid size ...passed 00:14:26.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:26.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:26.055 Test: blockdev write read max offset ...passed 00:14:26.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:26.055 Test: blockdev writev readv 8 blocks ...passed 00:14:26.055 Test: blockdev writev readv 30 x 1block ...passed 00:14:26.055 Test: blockdev writev readv block ...passed 00:14:26.055 Test: blockdev writev readv size > 128k ...passed 00:14:26.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:26.055 Test: blockdev comparev and writev ...passed 00:14:26.055 Test: blockdev nvme passthru rw ...passed 00:14:26.055 Test: blockdev nvme passthru vendor specific ...passed 00:14:26.055 Test: blockdev nvme admin passthru ...passed 00:14:26.055 Test: blockdev copy ...passed 00:14:26.055 Suite: bdevio tests on: nvme0n1 00:14:26.055 Test: blockdev write read block ...passed 00:14:26.055 Test: blockdev write zeroes read block ...passed 00:14:26.055 Test: blockdev write zeroes read no split ...passed 00:14:26.314 Test: blockdev write zeroes read split ...passed 00:14:26.314 Test: blockdev write zeroes read split partial ...passed 00:14:26.314 Test: blockdev reset ...passed 00:14:26.314 Test: blockdev write read 8 blocks ...passed 00:14:26.314 Test: blockdev write read size > 128k ...passed 00:14:26.314 Test: blockdev write read invalid size ...passed 00:14:26.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:26.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:26.314 Test: blockdev write read max offset ...passed 00:14:26.314 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:26.314 Test: blockdev writev readv 8 blocks ...passed 00:14:26.314 Test: blockdev writev readv 30 x 1block ...passed 00:14:26.314 Test: blockdev writev readv block ...passed 00:14:26.314 Test: blockdev writev readv size > 128k ...passed 00:14:26.314 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:26.314 Test: blockdev comparev and writev ...passed 00:14:26.314 Test: blockdev nvme passthru rw ...passed 00:14:26.314 Test: blockdev nvme passthru vendor specific ...passed 00:14:26.314 Test: blockdev nvme admin passthru ...passed 00:14:26.314 Test: blockdev copy ...passed 
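Each suite above is produced by the bdevio tool driven over RPC by its test runner. A minimal sketch of the same two-step run, assuming the paths and flags traced earlier in this log, would be:

    # start bdevio against the same bdev.json and let it wait for RPC commands
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # once the target is listening on /var/tmp/spdk.sock, trigger the CUnit suites
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests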
00:14:26.314 00:14:26.314 Run Summary: Type Total Ran Passed Failed Inactive 00:14:26.314 suites 6 6 n/a 0 0 00:14:26.314 tests 138 138 138 0 0 00:14:26.314 asserts 780 780 780 0 n/a 00:14:26.314 00:14:26.314 Elapsed time = 1.325 seconds 00:14:26.314 0 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75075 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75075 ']' 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75075 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75075 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75075' 00:14:26.314 killing process with pid 75075 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75075 00:14:26.314 13:10:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75075 00:14:27.687 13:10:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:27.687 00:14:27.687 real 0m2.812s 00:14:27.687 user 0m6.608s 00:14:27.687 sys 0m0.369s 00:14:27.687 13:10:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.687 13:10:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:27.687 ************************************ 00:14:27.687 END TEST bdev_bounds 00:14:27.687 ************************************ 00:14:27.687 13:10:19 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:27.687 13:10:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:27.687 13:10:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.688 13:10:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.688 ************************************ 00:14:27.688 START TEST bdev_nbd 00:14:27.688 ************************************ 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
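The bdev_nbd test starting here exports each bdev through the kernel NBD driver and verifies it with dd (see the rpc.py calls traced below). A minimal sketch of one start/verify/stop cycle, assuming a bdev_svc target already listening on /var/tmp/spdk-nbd.sock as in this run, would be:

    # map one bdev to an NBD device node
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    # wait for the device to appear, then read one 4 KiB block through it
    grep -q -w nbd0 /proc/partitions && \
        dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    # list active NBD mappings, then tear the device back down
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0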
00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75134 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75134 /var/tmp/spdk-nbd.sock 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75134 ']' 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:27.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.688 13:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:27.688 [2024-07-25 13:10:19.612080] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:27.688 [2024-07-25 13:10:19.612502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.688 [2024-07-25 13:10:19.787491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.945 [2024-07-25 13:10:19.992686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:28.511 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.769 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.769 
1+0 records in 00:14:28.770 1+0 records out 00:14:28.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426184 s, 9.6 MB/s 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:28.770 13:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.028 1+0 records in 00:14:29.028 1+0 records out 00:14:29.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799267 s, 5.1 MB/s 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:29.028 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:29.596 13:10:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.596 1+0 records in 00:14:29.596 1+0 records out 00:14:29.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533369 s, 7.7 MB/s 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:29.596 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:29.597 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.597 1+0 records in 00:14:29.597 1+0 records out 00:14:29.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055839 s, 7.3 MB/s 00:14:29.597 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.855 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:29.855 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.855 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:29.855 13:10:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:29.855 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:29.855 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:29.855 13:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.114 1+0 records in 00:14:30.114 1+0 records out 00:14:30.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696817 s, 5.9 MB/s 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:30.114 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:30.373 13:10:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.373 1+0 records in 00:14:30.373 1+0 records out 00:14:30.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673152 s, 6.1 MB/s 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:30.373 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd0", 00:14:30.633 "bdev_name": "nvme0n1" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd1", 00:14:30.633 "bdev_name": "nvme1n1" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd2", 00:14:30.633 "bdev_name": "nvme2n1" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd3", 00:14:30.633 "bdev_name": "nvme2n2" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd4", 00:14:30.633 "bdev_name": "nvme2n3" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd5", 00:14:30.633 "bdev_name": "nvme3n1" 00:14:30.633 } 00:14:30.633 ]' 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd0", 00:14:30.633 "bdev_name": "nvme0n1" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd1", 00:14:30.633 "bdev_name": "nvme1n1" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd2", 00:14:30.633 "bdev_name": "nvme2n1" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd3", 00:14:30.633 "bdev_name": "nvme2n2" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": "/dev/nbd4", 00:14:30.633 "bdev_name": "nvme2n3" 00:14:30.633 }, 00:14:30.633 { 00:14:30.633 "nbd_device": 
"/dev/nbd5", 00:14:30.633 "bdev_name": "nvme3n1" 00:14:30.633 } 00:14:30.633 ]' 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.633 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.892 13:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.151 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.408 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:31.666 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:31.666 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:31.666 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:31.666 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.666 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.667 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:31.667 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.667 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.667 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.667 13:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.925 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:32.183 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:32.183 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:32.183 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:32.183 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.183 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.183 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:32.184 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:32.184 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.184 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:32.184 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.184 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:32.751 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:33.010 /dev/nbd0 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.010 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.010 1+0 records in 00:14:33.010 1+0 records out 00:14:33.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349007 s, 11.7 MB/s 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:33.011 13:10:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:33.269 /dev/nbd1 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.269 1+0 records in 00:14:33.269 1+0 records out 00:14:33.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477997 s, 8.6 MB/s 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.269 13:10:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:33.269 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:33.528 /dev/nbd10 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.528 1+0 records in 00:14:33.528 1+0 records out 00:14:33.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427379 s, 9.6 MB/s 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:33.528 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:33.787 /dev/nbd11 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.787 13:10:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.787 1+0 records in 00:14:33.787 1+0 records out 00:14:33.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674732 s, 6.1 MB/s 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:33.787 13:10:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:34.046 /dev/nbd12 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.046 1+0 records in 00:14:34.046 1+0 records out 00:14:34.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500019 s, 8.2 MB/s 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:34.046 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:34.305 /dev/nbd13 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.305 1+0 records in 00:14:34.305 1+0 records out 00:14:34.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665435 s, 6.2 MB/s 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.305 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:34.564 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd0", 00:14:34.564 "bdev_name": "nvme0n1" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd1", 00:14:34.564 "bdev_name": "nvme1n1" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd10", 00:14:34.564 "bdev_name": "nvme2n1" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd11", 00:14:34.564 "bdev_name": "nvme2n2" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd12", 00:14:34.564 "bdev_name": "nvme2n3" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd13", 00:14:34.564 "bdev_name": "nvme3n1" 00:14:34.564 } 00:14:34.564 ]' 00:14:34.564 13:10:26 
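All six bdevs are exported at this point, and the nbd_get_disks JSON above pairs each /dev/nbd* node with its bdev. The readiness check the trace repeats for every device reduces to the helper below, reconstructed from the traced autotest_common.sh commands (lines 868-889 in this run); only the successful path appears in the log, so the retry delay and the failure return are assumptions:

# Reconstruction of waitfornbd as traced above. The sleep between retries and
# the behaviour when all 20 attempts fail are assumptions not visible in the log.
waitfornbd() {
    local nbd_name=$1
    local i
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    # Wait for the kernel to list the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Prove the device is actually readable: pull one 4 KiB block with O_DIRECT
    # and accept the device once a non-empty file comes back.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}
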
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd0", 00:14:34.564 "bdev_name": "nvme0n1" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd1", 00:14:34.564 "bdev_name": "nvme1n1" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd10", 00:14:34.564 "bdev_name": "nvme2n1" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd11", 00:14:34.564 "bdev_name": "nvme2n2" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd12", 00:14:34.564 "bdev_name": "nvme2n3" 00:14:34.564 }, 00:14:34.564 { 00:14:34.564 "nbd_device": "/dev/nbd13", 00:14:34.564 "bdev_name": "nvme3n1" 00:14:34.564 } 00:14:34.564 ]' 00:14:34.564 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:34.823 /dev/nbd1 00:14:34.823 /dev/nbd10 00:14:34.823 /dev/nbd11 00:14:34.823 /dev/nbd12 00:14:34.823 /dev/nbd13' 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:34.823 /dev/nbd1 00:14:34.823 /dev/nbd10 00:14:34.823 /dev/nbd11 00:14:34.823 /dev/nbd12 00:14:34.823 /dev/nbd13' 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:34.823 256+0 records in 00:14:34.823 256+0 records out 00:14:34.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104728 s, 100 MB/s 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:34.823 256+0 records in 00:14:34.823 256+0 records out 00:14:34.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155208 s, 6.8 MB/s 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:34.823 13:10:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:35.082 256+0 records in 00:14:35.082 256+0 records out 00:14:35.082 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.149306 s, 7.0 MB/s 00:14:35.082 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:35.082 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:35.082 256+0 records in 00:14:35.082 256+0 records out 00:14:35.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13723 s, 7.6 MB/s 00:14:35.082 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:35.082 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:35.340 256+0 records in 00:14:35.340 256+0 records out 00:14:35.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143448 s, 7.3 MB/s 00:14:35.340 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:35.340 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:35.599 256+0 records in 00:14:35.599 256+0 records out 00:14:35.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140397 s, 7.5 MB/s 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:35.599 256+0 records in 00:14:35.599 256+0 records out 00:14:35.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131581 s, 8.0 MB/s 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.599 13:10:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.864 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.430 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.688 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.017 13:10:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.275 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:37.534 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:37.792 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:37.793 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:37.793 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:37.793 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:37.793 13:10:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:38.051 malloc_lvol_verify 00:14:38.051 13:10:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:38.309 2ad742c9-9690-4aac-8dec-6294585b6994 00:14:38.309 13:10:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:38.568 ee6fc1d4-f080-4a56-82ad-709be6cbb154 00:14:38.568 13:10:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:38.826 /dev/nbd0 00:14:38.826 13:10:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:38.826 mke2fs 1.46.5 (30-Dec-2021) 00:14:38.826 Discarding device blocks: 0/4096 done 00:14:38.826 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:14:38.826 00:14:38.826 Allocating group tables: 0/1 done 00:14:38.826 Writing inode tables: 0/1 done 00:14:38.826 Creating journal (1024 blocks): done 00:14:38.826 Writing superblocks and filesystem accounting information: 0/1 done 00:14:38.826 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.826 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75134 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75134 ']' 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75134 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.083 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75134 00:14:39.372 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.373 killing process with pid 75134 00:14:39.373 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.373 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75134' 00:14:39.373 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75134 00:14:39.373 13:10:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75134 00:14:40.747 ************************************ 00:14:40.747 END TEST bdev_nbd 00:14:40.747 ************************************ 00:14:40.747 13:10:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:40.747 00:14:40.747 real 0m13.041s 00:14:40.747 user 0m18.604s 00:14:40.747 sys 
0m4.154s 00:14:40.747 13:10:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.747 13:10:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:40.747 13:10:32 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:40.747 13:10:32 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:40.747 13:10:32 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:40.747 13:10:32 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:40.747 13:10:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:40.747 13:10:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.747 13:10:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.747 ************************************ 00:14:40.747 START TEST bdev_fio 00:14:40.747 ************************************ 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:40.747 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # 
[[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:40.747 ************************************ 00:14:40.747 START TEST bdev_fio_rw_verify 00:14:40.747 ************************************ 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:40.747 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:40.748 13:10:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:40.748 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.748 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.748 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.748 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.748 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.748 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:40.748 fio-3.35 00:14:40.748 Starting 6 threads 00:14:52.943 00:14:52.943 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75563: Thu Jul 25 13:10:43 2024 00:14:52.943 read: IOPS=25.6k, BW=99.9MiB/s (105MB/s)(999MiB/10001msec) 00:14:52.943 slat (usec): 
min=3, max=852, avg= 7.60, stdev= 5.01 00:14:52.943 clat (usec): min=112, max=12701, avg=730.00, stdev=387.28 00:14:52.943 lat (usec): min=123, max=12710, avg=737.60, stdev=387.81 00:14:52.943 clat percentiles (usec): 00:14:52.943 | 50.000th=[ 734], 99.000th=[ 1450], 99.900th=[ 5080], 99.990th=[12518], 00:14:52.943 | 99.999th=[12649] 00:14:52.943 write: IOPS=25.8k, BW=101MiB/s (106MB/s)(1009MiB/10001msec); 0 zone resets 00:14:52.943 slat (usec): min=14, max=3592, avg=29.59, stdev=30.77 00:14:52.943 clat (usec): min=90, max=12882, avg=821.89, stdev=378.82 00:14:52.943 lat (usec): min=126, max=12903, avg=851.48, stdev=381.21 00:14:52.943 clat percentiles (usec): 00:14:52.943 | 50.000th=[ 824], 99.000th=[ 1614], 99.900th=[ 4817], 99.990th=[11731], 00:14:52.943 | 99.999th=[12780] 00:14:52.943 bw ( KiB/s): min=74160, max=132912, per=99.57%, avg=102876.68, stdev=2235.84, samples=114 00:14:52.943 iops : min=18540, max=33228, avg=25718.95, stdev=558.95, samples=114 00:14:52.943 lat (usec) : 100=0.01%, 250=1.90%, 500=14.54%, 750=29.21%, 1000=39.21% 00:14:52.943 lat (msec) : 2=14.58%, 4=0.40%, 10=0.13%, 20=0.03% 00:14:52.943 cpu : usr=60.44%, sys=26.16%, ctx=6766, majf=0, minf=22297 00:14:52.943 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:52.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.943 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.943 issued rwts: total=255736,258329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:52.943 00:14:52.943 Run status group 0 (all jobs): 00:14:52.943 READ: bw=99.9MiB/s (105MB/s), 99.9MiB/s-99.9MiB/s (105MB/s-105MB/s), io=999MiB (1047MB), run=10001-10001msec 00:14:52.943 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=1009MiB (1058MB), run=10001-10001msec 00:14:52.943 ----------------------------------------------------- 00:14:52.943 Suppressions used: 00:14:52.943 count bytes template 00:14:52.943 6 48 /usr/src/fio/parse.c 00:14:52.943 2415 231840 /usr/src/fio/iolog.c 00:14:52.943 1 8 libtcmalloc_minimal.so 00:14:52.943 1 904 libcrypto.so 00:14:52.943 ----------------------------------------------------- 00:14:52.943 00:14:52.943 00:14:52.943 real 0m12.348s 00:14:52.943 user 0m38.162s 00:14:52.943 sys 0m16.032s 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:52.943 ************************************ 00:14:52.943 END TEST bdev_fio_rw_verify 00:14:52.943 ************************************ 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local 
env_context= 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:52.943 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a59d0b0b-182e-43d9-a90f-6f4f701137b5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a59d0b0b-182e-43d9-a90f-6f4f701137b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "96f58e58-1724-40e9-8dd4-e1d5406cbd7a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "96f58e58-1724-40e9-8dd4-e1d5406cbd7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bb301f30-7232-4760-9540-78d51f16d6d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb301f30-7232-4760-9540-78d51f16d6d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' 
"reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f86a8fb4-3692-48de-b816-8feb19ea1eb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f86a8fb4-3692-48de-b816-8feb19ea1eb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "2b737d3c-57f7-45c6-9951-df5568ec4d88"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b737d3c-57f7-45c6-9951-df5568ec4d88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "72dfa4e0-e75b-487d-b48a-52fa3b980e4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "72dfa4e0-e75b-487d-b48a-52fa3b980e4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:52.944 /home/vagrant/spdk_repo/spdk 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT 
SIGTERM EXIT 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:14:52.944 00:14:52.944 real 0m12.519s 00:14:52.944 user 0m38.266s 00:14:52.944 sys 0m16.100s 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.944 ************************************ 00:14:52.944 END TEST bdev_fio 00:14:52.944 ************************************ 00:14:52.944 13:10:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:53.201 13:10:45 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:53.201 13:10:45 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:53.201 13:10:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:53.201 13:10:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.201 13:10:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.201 ************************************ 00:14:53.201 START TEST bdev_verify 00:14:53.201 ************************************ 00:14:53.201 13:10:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:53.201 [2024-07-25 13:10:45.252409] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:53.201 [2024-07-25 13:10:45.252558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75733 ] 00:14:53.459 [2024-07-25 13:10:45.418147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:53.718 [2024-07-25 13:10:45.649390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.718 [2024-07-25 13:10:45.649406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.976 Running I/O for 5 seconds... 
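The verify pass that has just started, and the big-I/O and write_zeroes passes that follow it, are all single bdevperf invocations against the same generated bdev.json, differing only in a few flags (I/O size, workload, runtime, core mask). The command below is the one traced above, with the common flags glossed; -C and the trailing empty argument are carried over verbatim without interpretation:

# -q 128     queue depth per job
# -o 4096    I/O size in bytes (65536 in the later big-I/O pass)
# -w verify  write each block, read it back and compare (write_zeroes in the last pass)
# -t 5       run time in seconds (1 for the write_zeroes pass)
# -m 0x3     core mask: reactors on cores 0 and 1 (dropped for the write_zeroes pass)
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/examples/bdevperf" --json "$spdk/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
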
00:14:59.240 00:14:59.240 Latency(us) 00:14:59.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.240 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x0 length 0xa0000 00:14:59.240 nvme0n1 : 5.05 1597.57 6.24 0.00 0.00 79973.87 13524.25 101997.85 00:14:59.240 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0xa0000 length 0xa0000 00:14:59.240 nvme0n1 : 5.04 1523.93 5.95 0.00 0.00 83831.67 15192.44 98184.84 00:14:59.240 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x0 length 0xbd0bd 00:14:59.240 nvme1n1 : 5.05 2769.79 10.82 0.00 0.00 45874.96 4915.20 77213.32 00:14:59.240 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:59.240 nvme1n1 : 5.05 2654.38 10.37 0.00 0.00 47893.20 4736.47 71493.82 00:14:59.240 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x0 length 0x80000 00:14:59.240 nvme2n1 : 5.06 1619.02 6.32 0.00 0.00 78483.50 8340.95 85792.58 00:14:59.240 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x80000 length 0x80000 00:14:59.240 nvme2n1 : 5.04 1548.39 6.05 0.00 0.00 82128.93 13583.83 82456.20 00:14:59.240 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x0 length 0x80000 00:14:59.240 nvme2n2 : 5.06 1617.97 6.32 0.00 0.00 78377.86 6702.55 105334.23 00:14:59.240 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x80000 length 0x80000 00:14:59.240 nvme2n2 : 5.06 1567.23 6.12 0.00 0.00 80982.82 5093.93 87222.46 00:14:59.240 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x0 length 0x80000 00:14:59.240 nvme2n3 : 5.07 1617.07 6.32 0.00 0.00 78272.02 8638.84 109147.23 00:14:59.240 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x80000 length 0x80000 00:14:59.240 nvme2n3 : 5.05 1545.07 6.04 0.00 0.00 81974.92 13107.20 88175.71 00:14:59.240 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x0 length 0x20000 00:14:59.240 nvme3n1 : 5.07 1616.24 6.31 0.00 0.00 78163.25 6285.50 110577.11 00:14:59.240 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.240 Verification LBA range: start 0x20000 length 0x20000 00:14:59.240 nvme3n1 : 5.06 1542.58 6.03 0.00 0.00 81941.71 5659.93 109147.23 00:14:59.240 =================================================================================================================== 00:14:59.240 Total : 21219.25 82.89 0.00 0.00 71810.98 4736.47 110577.11 00:15:00.172 00:15:00.172 real 0m7.184s 00:15:00.172 user 0m11.188s 00:15:00.172 sys 0m1.769s 00:15:00.172 13:10:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.172 13:10:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 ************************************ 00:15:00.172 END TEST bdev_verify 00:15:00.172 ************************************ 00:15:00.430 13:10:52 blockdev_xnvme -- 
bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:00.430 13:10:52 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:00.430 13:10:52 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.430 13:10:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.430 ************************************ 00:15:00.430 START TEST bdev_verify_big_io 00:15:00.430 ************************************ 00:15:00.430 13:10:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:00.430 [2024-07-25 13:10:52.472526] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:00.430 [2024-07-25 13:10:52.472670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75837 ] 00:15:00.700 [2024-07-25 13:10:52.636269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:00.700 [2024-07-25 13:10:52.830979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.700 [2024-07-25 13:10:52.830988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.266 Running I/O for 5 seconds... 00:15:07.823 00:15:07.823 Latency(us) 00:15:07.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.823 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x0 length 0xa000 00:15:07.823 nvme0n1 : 5.96 138.28 8.64 0.00 0.00 892718.17 50045.67 1151527.10 00:15:07.823 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0xa000 length 0xa000 00:15:07.823 nvme0n1 : 6.02 123.49 7.72 0.00 0.00 998030.00 132501.88 1258291.20 00:15:07.823 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x0 length 0xbd0b 00:15:07.823 nvme1n1 : 6.02 127.58 7.97 0.00 0.00 932649.89 179211.17 964689.92 00:15:07.823 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:07.823 nvme1n1 : 6.00 138.57 8.66 0.00 0.00 862339.62 45994.36 915120.87 00:15:07.823 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x0 length 0x8000 00:15:07.823 nvme2n1 : 5.96 107.31 6.71 0.00 0.00 1076105.22 95801.72 1708225.63 00:15:07.823 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x8000 length 0x8000 00:15:07.823 nvme2n1 : 6.01 133.18 8.32 0.00 0.00 867081.20 180164.42 937998.89 00:15:07.823 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x0 length 0x8000 00:15:07.823 nvme2n2 : 6.02 131.50 8.22 0.00 0.00 856629.54 53858.68 1265917.21 00:15:07.823 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x8000 length 0x8000 00:15:07.823 
nvme2n2 : 6.03 120.75 7.55 0.00 0.00 941432.50 16443.58 2211542.11 00:15:07.823 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x0 length 0x8000 00:15:07.823 nvme2n3 : 6.03 100.90 6.31 0.00 0.00 1074671.54 20018.27 1860745.77 00:15:07.823 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x8000 length 0x8000 00:15:07.823 nvme2n3 : 6.03 106.10 6.63 0.00 0.00 1034973.51 14834.97 1731103.65 00:15:07.823 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x0 length 0x2000 00:15:07.823 nvme3n1 : 6.07 102.85 6.43 0.00 0.00 1023576.82 4230.05 3248679.10 00:15:07.823 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:07.823 Verification LBA range: start 0x2000 length 0x2000 00:15:07.824 nvme3n1 : 6.04 92.79 5.80 0.00 0.00 1142467.81 12451.84 3248679.10 00:15:07.824 =================================================================================================================== 00:15:07.824 Total : 1423.30 88.96 0.00 0.00 964058.27 4230.05 3248679.10 00:15:08.759 00:15:08.759 real 0m8.418s 00:15:08.759 user 0m15.176s 00:15:08.759 sys 0m0.490s 00:15:08.759 13:11:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.759 13:11:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.759 ************************************ 00:15:08.759 END TEST bdev_verify_big_io 00:15:08.759 ************************************ 00:15:08.759 13:11:00 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:08.759 13:11:00 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:08.759 13:11:00 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.759 13:11:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.759 ************************************ 00:15:08.759 START TEST bdev_write_zeroes 00:15:08.759 ************************************ 00:15:08.759 13:11:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:09.017 [2024-07-25 13:11:00.976227] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:09.017 [2024-07-25 13:11:00.976373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75949 ] 00:15:09.017 [2024-07-25 13:11:01.139634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.275 [2024-07-25 13:11:01.325537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.842 Running I/O for 1 seconds... 
00:15:10.777 00:15:10.777 Latency(us) 00:15:10.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.777 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:10.777 nvme0n1 : 1.00 11094.17 43.34 0.00 0.00 11524.86 7506.85 17754.30 00:15:10.777 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:10.777 nvme1n1 : 1.01 14833.96 57.95 0.00 0.00 8597.11 3470.43 14239.19 00:15:10.777 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:10.777 nvme2n1 : 1.02 11092.70 43.33 0.00 0.00 11466.09 7387.69 17992.61 00:15:10.777 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:10.777 nvme2n2 : 1.02 11076.53 43.27 0.00 0.00 11473.59 7387.69 17992.61 00:15:10.777 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:10.777 nvme2n3 : 1.02 11059.71 43.20 0.00 0.00 11481.97 7477.06 17992.61 00:15:10.777 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:10.777 nvme3n1 : 1.02 11043.42 43.14 0.00 0.00 11492.06 7477.06 17992.61 00:15:10.777 =================================================================================================================== 00:15:10.777 Total : 70200.48 274.22 0.00 0.00 10877.32 3470.43 17992.61 00:15:11.712 00:15:11.712 real 0m3.045s 00:15:11.712 user 0m2.300s 00:15:11.712 sys 0m0.567s 00:15:11.712 13:11:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.712 13:11:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:11.712 ************************************ 00:15:11.712 END TEST bdev_write_zeroes 00:15:11.712 ************************************ 00:15:11.970 13:11:03 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:11.970 13:11:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:11.970 13:11:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.970 13:11:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.970 ************************************ 00:15:11.970 START TEST bdev_json_nonenclosed 00:15:11.970 ************************************ 00:15:11.970 13:11:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:11.970 [2024-07-25 13:11:04.020875] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:11.970 [2024-07-25 13:11:04.021028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76007 ] 00:15:12.228 [2024-07-25 13:11:04.185828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.228 [2024-07-25 13:11:04.370071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.228 [2024-07-25 13:11:04.370214] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:15:12.228 [2024-07-25 13:11:04.370247] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:12.228 [2024-07-25 13:11:04.370265] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:12.793 00:15:12.793 real 0m0.846s 00:15:12.793 user 0m0.631s 00:15:12.793 sys 0m0.109s 00:15:12.793 13:11:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.793 ************************************ 00:15:12.793 END TEST bdev_json_nonenclosed 00:15:12.793 ************************************ 00:15:12.793 13:11:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:12.793 13:11:04 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:12.793 13:11:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:12.793 13:11:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.793 13:11:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.793 ************************************ 00:15:12.793 START TEST bdev_json_nonarray 00:15:12.793 ************************************ 00:15:12.793 13:11:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:12.793 [2024-07-25 13:11:04.914868] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:12.793 [2024-07-25 13:11:04.915024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76034 ] 00:15:13.049 [2024-07-25 13:11:05.078071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.306 [2024-07-25 13:11:05.262331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.306 [2024-07-25 13:11:05.262437] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:13.306 [2024-07-25 13:11:05.262471] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:13.306 [2024-07-25 13:11:05.262489] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:13.563 00:15:13.563 real 0m0.865s 00:15:13.563 user 0m0.635s 00:15:13.563 sys 0m0.124s 00:15:13.563 13:11:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.563 ************************************ 00:15:13.563 END TEST bdev_json_nonarray 00:15:13.563 13:11:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:13.563 ************************************ 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:13.563 13:11:05 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:14.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:15.500 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.500 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.500 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.500 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.500 00:15:15.500 real 1m2.735s 00:15:15.500 user 1m45.678s 00:15:15.500 sys 0m27.930s 00:15:15.500 13:11:07 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.500 13:11:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.500 ************************************ 00:15:15.500 END TEST blockdev_xnvme 00:15:15.500 ************************************ 00:15:15.500 13:11:07 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:15.500 13:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:15.500 13:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.500 13:11:07 -- common/autotest_common.sh@10 -- # set +x 00:15:15.500 ************************************ 00:15:15.500 START TEST ublk 00:15:15.500 ************************************ 00:15:15.500 13:11:07 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:15.758 * Looking for test storage... 
00:15:15.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:15.758 13:11:07 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:15.758 13:11:07 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:15.758 13:11:07 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:15.758 13:11:07 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:15.758 13:11:07 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:15.758 13:11:07 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:15.758 13:11:07 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:15.758 13:11:07 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:15.758 13:11:07 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:15.758 13:11:07 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:15.758 13:11:07 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.758 13:11:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:15.758 ************************************ 00:15:15.758 START TEST test_save_ublk_config 00:15:15.758 ************************************ 00:15:15.758 13:11:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:15.758 13:11:07 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:15.758 13:11:07 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76323 00:15:15.758 13:11:07 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:15.758 13:11:07 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:15.758 13:11:07 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76323 00:15:15.759 13:11:07 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76323 ']' 00:15:15.759 13:11:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.759 13:11:07 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.759 13:11:07 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.759 13:11:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.759 13:11:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:15.759 [2024-07-25 13:11:07.900830] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:15.759 [2024-07-25 13:11:07.901026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76323 ] 00:15:16.016 [2024-07-25 13:11:08.073027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.273 [2024-07-25 13:11:08.283269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.839 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.839 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:16.839 13:11:09 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:16.839 13:11:09 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:16.839 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.839 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.097 [2024-07-25 13:11:09.034145] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:17.097 [2024-07-25 13:11:09.035240] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:17.097 malloc0 00:15:17.097 [2024-07-25 13:11:09.106543] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:17.097 [2024-07-25 13:11:09.106677] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:17.097 [2024-07-25 13:11:09.106694] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:17.097 [2024-07-25 13:11:09.106707] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:17.097 [2024-07-25 13:11:09.115222] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:17.097 [2024-07-25 13:11:09.115270] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:17.097 [2024-07-25 13:11:09.122151] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:17.097 [2024-07-25 13:11:09.122292] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:17.097 [2024-07-25 13:11:09.139136] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:17.097 0 00:15:17.097 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.097 13:11:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:17.097 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.097 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.354 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.354 13:11:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:17.354 "subsystems": [ 00:15:17.354 { 00:15:17.354 "subsystem": "keyring", 00:15:17.354 "config": [] 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "subsystem": "iobuf", 00:15:17.354 "config": [ 00:15:17.354 { 00:15:17.354 "method": "iobuf_set_options", 00:15:17.354 "params": { 00:15:17.354 "small_pool_count": 8192, 00:15:17.354 "large_pool_count": 1024, 00:15:17.354 "small_bufsize": 8192, 00:15:17.354 "large_bufsize": 135168 00:15:17.354 } 00:15:17.354 } 00:15:17.354 ] 00:15:17.354 }, 00:15:17.354 { 
00:15:17.354 "subsystem": "sock", 00:15:17.354 "config": [ 00:15:17.354 { 00:15:17.354 "method": "sock_set_default_impl", 00:15:17.354 "params": { 00:15:17.354 "impl_name": "posix" 00:15:17.354 } 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "method": "sock_impl_set_options", 00:15:17.354 "params": { 00:15:17.354 "impl_name": "ssl", 00:15:17.354 "recv_buf_size": 4096, 00:15:17.354 "send_buf_size": 4096, 00:15:17.354 "enable_recv_pipe": true, 00:15:17.354 "enable_quickack": false, 00:15:17.354 "enable_placement_id": 0, 00:15:17.354 "enable_zerocopy_send_server": true, 00:15:17.354 "enable_zerocopy_send_client": false, 00:15:17.354 "zerocopy_threshold": 0, 00:15:17.354 "tls_version": 0, 00:15:17.354 "enable_ktls": false 00:15:17.354 } 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "method": "sock_impl_set_options", 00:15:17.354 "params": { 00:15:17.354 "impl_name": "posix", 00:15:17.354 "recv_buf_size": 2097152, 00:15:17.354 "send_buf_size": 2097152, 00:15:17.354 "enable_recv_pipe": true, 00:15:17.354 "enable_quickack": false, 00:15:17.354 "enable_placement_id": 0, 00:15:17.354 "enable_zerocopy_send_server": true, 00:15:17.354 "enable_zerocopy_send_client": false, 00:15:17.354 "zerocopy_threshold": 0, 00:15:17.354 "tls_version": 0, 00:15:17.354 "enable_ktls": false 00:15:17.354 } 00:15:17.354 } 00:15:17.354 ] 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "subsystem": "vmd", 00:15:17.354 "config": [] 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "subsystem": "accel", 00:15:17.354 "config": [ 00:15:17.354 { 00:15:17.354 "method": "accel_set_options", 00:15:17.354 "params": { 00:15:17.354 "small_cache_size": 128, 00:15:17.354 "large_cache_size": 16, 00:15:17.354 "task_count": 2048, 00:15:17.354 "sequence_count": 2048, 00:15:17.354 "buf_count": 2048 00:15:17.354 } 00:15:17.354 } 00:15:17.354 ] 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "subsystem": "bdev", 00:15:17.354 "config": [ 00:15:17.354 { 00:15:17.354 "method": "bdev_set_options", 00:15:17.354 "params": { 00:15:17.354 "bdev_io_pool_size": 65535, 00:15:17.354 "bdev_io_cache_size": 256, 00:15:17.354 "bdev_auto_examine": true, 00:15:17.354 "iobuf_small_cache_size": 128, 00:15:17.354 "iobuf_large_cache_size": 16 00:15:17.354 } 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "method": "bdev_raid_set_options", 00:15:17.354 "params": { 00:15:17.354 "process_window_size_kb": 1024, 00:15:17.354 "process_max_bandwidth_mb_sec": 0 00:15:17.354 } 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "method": "bdev_iscsi_set_options", 00:15:17.354 "params": { 00:15:17.354 "timeout_sec": 30 00:15:17.354 } 00:15:17.354 }, 00:15:17.354 { 00:15:17.354 "method": "bdev_nvme_set_options", 00:15:17.354 "params": { 00:15:17.354 "action_on_timeout": "none", 00:15:17.354 "timeout_us": 0, 00:15:17.354 "timeout_admin_us": 0, 00:15:17.354 "keep_alive_timeout_ms": 10000, 00:15:17.354 "arbitration_burst": 0, 00:15:17.354 "low_priority_weight": 0, 00:15:17.354 "medium_priority_weight": 0, 00:15:17.354 "high_priority_weight": 0, 00:15:17.354 "nvme_adminq_poll_period_us": 10000, 00:15:17.355 "nvme_ioq_poll_period_us": 0, 00:15:17.355 "io_queue_requests": 0, 00:15:17.355 "delay_cmd_submit": true, 00:15:17.355 "transport_retry_count": 4, 00:15:17.355 "bdev_retry_count": 3, 00:15:17.355 "transport_ack_timeout": 0, 00:15:17.355 "ctrlr_loss_timeout_sec": 0, 00:15:17.355 "reconnect_delay_sec": 0, 00:15:17.355 "fast_io_fail_timeout_sec": 0, 00:15:17.355 "disable_auto_failback": false, 00:15:17.355 "generate_uuids": false, 00:15:17.355 "transport_tos": 0, 00:15:17.355 "nvme_error_stat": false, 
00:15:17.355 "rdma_srq_size": 0, 00:15:17.355 "io_path_stat": false, 00:15:17.355 "allow_accel_sequence": false, 00:15:17.355 "rdma_max_cq_size": 0, 00:15:17.355 "rdma_cm_event_timeout_ms": 0, 00:15:17.355 "dhchap_digests": [ 00:15:17.355 "sha256", 00:15:17.355 "sha384", 00:15:17.355 "sha512" 00:15:17.355 ], 00:15:17.355 "dhchap_dhgroups": [ 00:15:17.355 "null", 00:15:17.355 "ffdhe2048", 00:15:17.355 "ffdhe3072", 00:15:17.355 "ffdhe4096", 00:15:17.355 "ffdhe6144", 00:15:17.355 "ffdhe8192" 00:15:17.355 ] 00:15:17.355 } 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "method": "bdev_nvme_set_hotplug", 00:15:17.355 "params": { 00:15:17.355 "period_us": 100000, 00:15:17.355 "enable": false 00:15:17.355 } 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "method": "bdev_malloc_create", 00:15:17.355 "params": { 00:15:17.355 "name": "malloc0", 00:15:17.355 "num_blocks": 8192, 00:15:17.355 "block_size": 4096, 00:15:17.355 "physical_block_size": 4096, 00:15:17.355 "uuid": "14d1d68e-8053-46d9-a78a-2088338d8401", 00:15:17.355 "optimal_io_boundary": 0, 00:15:17.355 "md_size": 0, 00:15:17.355 "dif_type": 0, 00:15:17.355 "dif_is_head_of_md": false, 00:15:17.355 "dif_pi_format": 0 00:15:17.355 } 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "method": "bdev_wait_for_examine" 00:15:17.355 } 00:15:17.355 ] 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "scsi", 00:15:17.355 "config": null 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "scheduler", 00:15:17.355 "config": [ 00:15:17.355 { 00:15:17.355 "method": "framework_set_scheduler", 00:15:17.355 "params": { 00:15:17.355 "name": "static" 00:15:17.355 } 00:15:17.355 } 00:15:17.355 ] 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "vhost_scsi", 00:15:17.355 "config": [] 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "vhost_blk", 00:15:17.355 "config": [] 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "ublk", 00:15:17.355 "config": [ 00:15:17.355 { 00:15:17.355 "method": "ublk_create_target", 00:15:17.355 "params": { 00:15:17.355 "cpumask": "1" 00:15:17.355 } 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "method": "ublk_start_disk", 00:15:17.355 "params": { 00:15:17.355 "bdev_name": "malloc0", 00:15:17.355 "ublk_id": 0, 00:15:17.355 "num_queues": 1, 00:15:17.355 "queue_depth": 128 00:15:17.355 } 00:15:17.355 } 00:15:17.355 ] 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "nbd", 00:15:17.355 "config": [] 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "nvmf", 00:15:17.355 "config": [ 00:15:17.355 { 00:15:17.355 "method": "nvmf_set_config", 00:15:17.355 "params": { 00:15:17.355 "discovery_filter": "match_any", 00:15:17.355 "admin_cmd_passthru": { 00:15:17.355 "identify_ctrlr": false 00:15:17.355 } 00:15:17.355 } 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "method": "nvmf_set_max_subsystems", 00:15:17.355 "params": { 00:15:17.355 "max_subsystems": 1024 00:15:17.355 } 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "method": "nvmf_set_crdt", 00:15:17.355 "params": { 00:15:17.355 "crdt1": 0, 00:15:17.355 "crdt2": 0, 00:15:17.355 "crdt3": 0 00:15:17.355 } 00:15:17.355 } 00:15:17.355 ] 00:15:17.355 }, 00:15:17.355 { 00:15:17.355 "subsystem": "iscsi", 00:15:17.355 "config": [ 00:15:17.355 { 00:15:17.355 "method": "iscsi_set_options", 00:15:17.355 "params": { 00:15:17.355 "node_base": "iqn.2016-06.io.spdk", 00:15:17.355 "max_sessions": 128, 00:15:17.355 "max_connections_per_session": 2, 00:15:17.355 "max_queue_depth": 64, 00:15:17.355 "default_time2wait": 2, 00:15:17.355 "default_time2retain": 20, 00:15:17.355 
"first_burst_length": 8192, 00:15:17.355 "immediate_data": true, 00:15:17.355 "allow_duplicated_isid": false, 00:15:17.355 "error_recovery_level": 0, 00:15:17.355 "nop_timeout": 60, 00:15:17.355 "nop_in_interval": 30, 00:15:17.355 "disable_chap": false, 00:15:17.355 "require_chap": false, 00:15:17.355 "mutual_chap": false, 00:15:17.355 "chap_group": 0, 00:15:17.355 "max_large_datain_per_connection": 64, 00:15:17.355 "max_r2t_per_connection": 4, 00:15:17.355 "pdu_pool_size": 36864, 00:15:17.355 "immediate_data_pool_size": 16384, 00:15:17.355 "data_out_pool_size": 2048 00:15:17.355 } 00:15:17.355 } 00:15:17.355 ] 00:15:17.355 } 00:15:17.355 ] 00:15:17.355 }' 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76323 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76323 ']' 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76323 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76323 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.355 killing process with pid 76323 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76323' 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76323 00:15:17.355 13:11:09 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76323 00:15:18.730 [2024-07-25 13:11:10.743891] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:18.730 [2024-07-25 13:11:10.783181] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:18.730 [2024-07-25 13:11:10.783403] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:18.730 [2024-07-25 13:11:10.791162] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:18.730 [2024-07-25 13:11:10.791244] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:18.730 [2024-07-25 13:11:10.791259] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:18.730 [2024-07-25 13:11:10.791300] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.730 [2024-07-25 13:11:10.791497] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:20.104 13:11:12 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76378 00:15:20.104 13:11:12 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76378 00:15:20.104 13:11:12 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76378 ']' 00:15:20.104 13:11:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.104 13:11:12 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.105 13:11:12 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:20.105 13:11:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.105 13:11:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:20.105 13:11:12 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:20.105 13:11:12 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:20.105 "subsystems": [ 00:15:20.105 { 00:15:20.105 "subsystem": "keyring", 00:15:20.105 "config": [] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "iobuf", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "iobuf_set_options", 00:15:20.105 "params": { 00:15:20.105 "small_pool_count": 8192, 00:15:20.105 "large_pool_count": 1024, 00:15:20.105 "small_bufsize": 8192, 00:15:20.105 "large_bufsize": 135168 00:15:20.105 } 00:15:20.105 } 00:15:20.105 ] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "sock", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "sock_set_default_impl", 00:15:20.105 "params": { 00:15:20.105 "impl_name": "posix" 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "sock_impl_set_options", 00:15:20.105 "params": { 00:15:20.105 "impl_name": "ssl", 00:15:20.105 "recv_buf_size": 4096, 00:15:20.105 "send_buf_size": 4096, 00:15:20.105 "enable_recv_pipe": true, 00:15:20.105 "enable_quickack": false, 00:15:20.105 "enable_placement_id": 0, 00:15:20.105 "enable_zerocopy_send_server": true, 00:15:20.105 "enable_zerocopy_send_client": false, 00:15:20.105 "zerocopy_threshold": 0, 00:15:20.105 "tls_version": 0, 00:15:20.105 "enable_ktls": false 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "sock_impl_set_options", 00:15:20.105 "params": { 00:15:20.105 "impl_name": "posix", 00:15:20.105 "recv_buf_size": 2097152, 00:15:20.105 "send_buf_size": 2097152, 00:15:20.105 "enable_recv_pipe": true, 00:15:20.105 "enable_quickack": false, 00:15:20.105 "enable_placement_id": 0, 00:15:20.105 "enable_zerocopy_send_server": true, 00:15:20.105 "enable_zerocopy_send_client": false, 00:15:20.105 "zerocopy_threshold": 0, 00:15:20.105 "tls_version": 0, 00:15:20.105 "enable_ktls": false 00:15:20.105 } 00:15:20.105 } 00:15:20.105 ] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "vmd", 00:15:20.105 "config": [] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "accel", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "accel_set_options", 00:15:20.105 "params": { 00:15:20.105 "small_cache_size": 128, 00:15:20.105 "large_cache_size": 16, 00:15:20.105 "task_count": 2048, 00:15:20.105 "sequence_count": 2048, 00:15:20.105 "buf_count": 2048 00:15:20.105 } 00:15:20.105 } 00:15:20.105 ] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "bdev", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "bdev_set_options", 00:15:20.105 "params": { 00:15:20.105 "bdev_io_pool_size": 65535, 00:15:20.105 "bdev_io_cache_size": 256, 00:15:20.105 "bdev_auto_examine": true, 00:15:20.105 "iobuf_small_cache_size": 128, 00:15:20.105 "iobuf_large_cache_size": 16 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "bdev_raid_set_options", 00:15:20.105 "params": { 00:15:20.105 "process_window_size_kb": 1024, 00:15:20.105 "process_max_bandwidth_mb_sec": 0 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "bdev_iscsi_set_options", 00:15:20.105 "params": { 00:15:20.105 "timeout_sec": 30 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": 
"bdev_nvme_set_options", 00:15:20.105 "params": { 00:15:20.105 "action_on_timeout": "none", 00:15:20.105 "timeout_us": 0, 00:15:20.105 "timeout_admin_us": 0, 00:15:20.105 "keep_alive_timeout_ms": 10000, 00:15:20.105 "arbitration_burst": 0, 00:15:20.105 "low_priority_weight": 0, 00:15:20.105 "medium_priority_weight": 0, 00:15:20.105 "high_priority_weight": 0, 00:15:20.105 "nvme_adminq_poll_period_us": 10000, 00:15:20.105 "nvme_ioq_poll_period_us": 0, 00:15:20.105 "io_queue_requests": 0, 00:15:20.105 "delay_cmd_submit": true, 00:15:20.105 "transport_retry_count": 4, 00:15:20.105 "bdev_retry_count": 3, 00:15:20.105 "transport_ack_timeout": 0, 00:15:20.105 "ctrlr_loss_timeout_sec": 0, 00:15:20.105 "reconnect_delay_sec": 0, 00:15:20.105 "fast_io_fail_timeout_sec": 0, 00:15:20.105 "disable_auto_failback": false, 00:15:20.105 "generate_uuids": false, 00:15:20.105 "transport_tos": 0, 00:15:20.105 "nvme_error_stat": false, 00:15:20.105 "rdma_srq_size": 0, 00:15:20.105 "io_path_stat": false, 00:15:20.105 "allow_accel_sequence": false, 00:15:20.105 "rdma_max_cq_size": 0, 00:15:20.105 "rdma_cm_event_timeout_ms": 0, 00:15:20.105 "dhchap_digests": [ 00:15:20.105 "sha256", 00:15:20.105 "sha384", 00:15:20.105 "sha512" 00:15:20.105 ], 00:15:20.105 "dhchap_dhgroups": [ 00:15:20.105 "null", 00:15:20.105 "ffdhe2048", 00:15:20.105 "ffdhe3072", 00:15:20.105 "ffdhe4096", 00:15:20.105 "ffdhe6144", 00:15:20.105 "ffdhe8192" 00:15:20.105 ] 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "bdev_nvme_set_hotplug", 00:15:20.105 "params": { 00:15:20.105 "period_us": 100000, 00:15:20.105 "enable": false 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "bdev_malloc_create", 00:15:20.105 "params": { 00:15:20.105 "name": "malloc0", 00:15:20.105 "num_blocks": 8192, 00:15:20.105 "block_size": 4096, 00:15:20.105 "physical_block_size": 4096, 00:15:20.105 "uuid": "14d1d68e-8053-46d9-a78a-2088338d8401", 00:15:20.105 "optimal_io_boundary": 0, 00:15:20.105 "md_size": 0, 00:15:20.105 "dif_type": 0, 00:15:20.105 "dif_is_head_of_md": false, 00:15:20.105 "dif_pi_format": 0 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "bdev_wait_for_examine" 00:15:20.105 } 00:15:20.105 ] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "scsi", 00:15:20.105 "config": null 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "scheduler", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "framework_set_scheduler", 00:15:20.105 "params": { 00:15:20.105 "name": "static" 00:15:20.105 } 00:15:20.105 } 00:15:20.105 ] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "vhost_scsi", 00:15:20.105 "config": [] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "vhost_blk", 00:15:20.105 "config": [] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "ublk", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "ublk_create_target", 00:15:20.105 "params": { 00:15:20.105 "cpumask": "1" 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "ublk_start_disk", 00:15:20.105 "params": { 00:15:20.105 "bdev_name": "malloc0", 00:15:20.105 "ublk_id": 0, 00:15:20.105 "num_queues": 1, 00:15:20.105 "queue_depth": 128 00:15:20.105 } 00:15:20.105 } 00:15:20.105 ] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "nbd", 00:15:20.105 "config": [] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "nvmf", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "nvmf_set_config", 00:15:20.105 "params": { 00:15:20.105 "discovery_filter": 
"match_any", 00:15:20.105 "admin_cmd_passthru": { 00:15:20.105 "identify_ctrlr": false 00:15:20.105 } 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "nvmf_set_max_subsystems", 00:15:20.105 "params": { 00:15:20.105 "max_subsystems": 1024 00:15:20.105 } 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "method": "nvmf_set_crdt", 00:15:20.105 "params": { 00:15:20.105 "crdt1": 0, 00:15:20.105 "crdt2": 0, 00:15:20.105 "crdt3": 0 00:15:20.105 } 00:15:20.105 } 00:15:20.105 ] 00:15:20.105 }, 00:15:20.105 { 00:15:20.105 "subsystem": "iscsi", 00:15:20.105 "config": [ 00:15:20.105 { 00:15:20.105 "method": "iscsi_set_options", 00:15:20.105 "params": { 00:15:20.105 "node_base": "iqn.2016-06.io.spdk", 00:15:20.105 "max_sessions": 128, 00:15:20.105 "max_connections_per_session": 2, 00:15:20.105 "max_queue_depth": 64, 00:15:20.105 "default_time2wait": 2, 00:15:20.105 "default_time2retain": 20, 00:15:20.105 "first_burst_length": 8192, 00:15:20.106 "immediate_data": true, 00:15:20.106 "allow_duplicated_isid": false, 00:15:20.106 "error_recovery_level": 0, 00:15:20.106 "nop_timeout": 60, 00:15:20.106 "nop_in_interval": 30, 00:15:20.106 "disable_chap": false, 00:15:20.106 "require_chap": false, 00:15:20.106 "mutual_chap": false, 00:15:20.106 "chap_group": 0, 00:15:20.106 "max_large_datain_per_connection": 64, 00:15:20.106 "max_r2t_per_connection": 4, 00:15:20.106 "pdu_pool_size": 36864, 00:15:20.106 "immediate_data_pool_size": 16384, 00:15:20.106 "data_out_pool_size": 2048 00:15:20.106 } 00:15:20.106 } 00:15:20.106 ] 00:15:20.106 } 00:15:20.106 ] 00:15:20.106 }' 00:15:20.106 [2024-07-25 13:11:12.136792] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:20.106 [2024-07-25 13:11:12.136972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76378 ] 00:15:20.363 [2024-07-25 13:11:12.307648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.363 [2024-07-25 13:11:12.528176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.325 [2024-07-25 13:11:13.381130] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:21.325 [2024-07-25 13:11:13.382238] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:21.325 [2024-07-25 13:11:13.389267] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:21.325 [2024-07-25 13:11:13.389364] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:21.325 [2024-07-25 13:11:13.389380] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:21.325 [2024-07-25 13:11:13.389389] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:21.325 [2024-07-25 13:11:13.397251] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:21.325 [2024-07-25 13:11:13.397276] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:21.325 [2024-07-25 13:11:13.405144] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:21.325 [2024-07-25 13:11:13.405274] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:21.325 [2024-07-25 13:11:13.422136] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:15:21.325 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.325 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:21.325 13:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:21.325 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.325 13:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:21.325 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.600 13:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:21.600 13:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:21.600 13:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76378 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76378 ']' 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76378 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76378 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76378' 00:15:21.601 killing process with pid 76378 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76378 00:15:21.601 13:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76378 00:15:22.973 [2024-07-25 13:11:14.930479] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:22.973 [2024-07-25 13:11:14.967212] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:22.973 [2024-07-25 13:11:14.970132] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:22.973 [2024-07-25 13:11:14.975164] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:22.973 [2024-07-25 13:11:14.975230] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:22.973 [2024-07-25 13:11:14.975245] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:22.973 [2024-07-25 13:11:14.975296] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:22.973 [2024-07-25 13:11:14.975488] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:24.345 13:11:16 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:24.345 00:15:24.345 real 0m8.427s 00:15:24.345 user 0m7.464s 00:15:24.345 sys 0m1.912s 00:15:24.345 13:11:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.345 13:11:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:24.345 ************************************ 00:15:24.345 END TEST test_save_ublk_config 00:15:24.345 ************************************ 00:15:24.345 13:11:16 
ublk -- ublk/ublk.sh@139 -- # spdk_pid=76456 00:15:24.345 13:11:16 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:24.345 13:11:16 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:24.345 13:11:16 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76456 00:15:24.345 13:11:16 ublk -- common/autotest_common.sh@831 -- # '[' -z 76456 ']' 00:15:24.345 13:11:16 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.345 13:11:16 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.345 13:11:16 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.345 13:11:16 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.345 13:11:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:24.345 [2024-07-25 13:11:16.365841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:24.345 [2024-07-25 13:11:16.366014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76456 ] 00:15:24.603 [2024-07-25 13:11:16.537269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:24.603 [2024-07-25 13:11:16.762947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.603 [2024-07-25 13:11:16.762950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.536 13:11:17 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.536 13:11:17 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:25.536 13:11:17 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:25.536 13:11:17 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:25.536 13:11:17 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.536 13:11:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:25.536 ************************************ 00:15:25.536 START TEST test_create_ublk 00:15:25.536 ************************************ 00:15:25.536 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:25.536 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:25.536 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.536 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:25.536 [2024-07-25 13:11:17.522131] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:25.536 [2024-07-25 13:11:17.524530] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:25.536 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.536 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:25.536 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:25.536 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.536 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:25.794 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.794 
13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:25.794 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:25.794 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.794 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:25.794 [2024-07-25 13:11:17.770289] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:25.794 [2024-07-25 13:11:17.770796] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:25.794 [2024-07-25 13:11:17.770819] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:25.794 [2024-07-25 13:11:17.770833] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:25.794 [2024-07-25 13:11:17.779343] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:25.794 [2024-07-25 13:11:17.779375] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:25.794 [2024-07-25 13:11:17.786137] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:25.794 [2024-07-25 13:11:17.796368] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:25.794 [2024-07-25 13:11:17.817153] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:25.794 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.794 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:25.794 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:25.794 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:25.794 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.794 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:25.794 13:11:17 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.794 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:25.794 { 00:15:25.794 "ublk_device": "/dev/ublkb0", 00:15:25.794 "id": 0, 00:15:25.794 "queue_depth": 512, 00:15:25.794 "num_queues": 4, 00:15:25.794 "bdev_name": "Malloc0" 00:15:25.795 } 00:15:25.795 ]' 00:15:25.795 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:25.795 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:25.795 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:25.795 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:25.795 13:11:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:26.052 13:11:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:26.052 13:11:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:26.052 13:11:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:26.052 13:11:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:26.052 13:11:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:26.052 13:11:18 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:26.052 13:11:18 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:26.052 fio: verification read phase will never start because write phase uses all of runtime 00:15:26.052 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:26.052 fio-3.35 00:15:26.052 Starting 1 process 00:15:38.264 00:15:38.265 fio_test: (groupid=0, jobs=1): err= 0: pid=76505: Thu Jul 25 13:11:28 2024 00:15:38.265 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(458MiB/10001msec); 0 zone resets 00:15:38.265 clat (usec): min=58, max=5434, avg=83.72, stdev=129.56 00:15:38.265 lat (usec): min=59, max=5436, avg=84.57, stdev=129.57 00:15:38.265 clat percentiles (usec): 00:15:38.265 | 1.00th=[ 62], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 74], 00:15:38.265 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 76], 60.00th=[ 77], 00:15:38.265 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 90], 00:15:38.265 | 99.00th=[ 100], 99.50th=[ 113], 99.90th=[ 2671], 99.95th=[ 3130], 00:15:38.265 | 99.99th=[ 3720] 00:15:38.265 bw ( KiB/s): min=43808, max=50728, per=100.00%, avg=46917.89, stdev=1352.10, samples=19 00:15:38.265 iops : min=10952, max=12682, avg=11729.47, stdev=338.03, samples=19 00:15:38.265 lat (usec) : 100=99.02%, 250=0.62%, 500=0.01%, 750=0.02%, 1000=0.03% 00:15:38.265 lat (msec) : 2=0.12%, 4=0.18%, 10=0.01% 00:15:38.265 cpu : usr=3.66%, sys=8.90%, ctx=117202, majf=0, minf=795 00:15:38.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.265 issued rwts: total=0,117201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.265 00:15:38.265 Run status group 0 (all jobs): 00:15:38.265 WRITE: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=458MiB (480MB), run=10001-10001msec 00:15:38.265 00:15:38.265 Disk stats (read/write): 00:15:38.265 ublkb0: ios=0/116005, merge=0/0, ticks=0/8694, in_queue=8695, util=99.06% 00:15:38.265 13:11:28 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd 
ublk_stop_disk 0 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 [2024-07-25 13:11:28.349352] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:38.265 [2024-07-25 13:11:28.397201] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:38.265 [2024-07-25 13:11:28.402566] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:38.265 [2024-07-25 13:11:28.410177] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:38.265 [2024-07-25 13:11:28.410556] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:38.265 [2024-07-25 13:11:28.410572] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:28 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 [2024-07-25 13:11:28.429258] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:38.265 request: 00:15:38.265 { 00:15:38.265 "ublk_id": 0, 00:15:38.265 "method": "ublk_stop_disk", 00:15:38.265 "req_id": 1 00:15:38.265 } 00:15:38.265 Got JSON-RPC error response 00:15:38.265 response: 00:15:38.265 { 00:15:38.265 "code": -19, 00:15:38.265 "message": "No such device" 00:15:38.265 } 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:38.265 13:11:28 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 [2024-07-25 13:11:28.445228] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:38.265 [2024-07-25 13:11:28.453147] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:38.265 [2024-07-25 13:11:28.453195] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:38.265 13:11:28 ublk.test_create_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:28 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:28 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:38.265 13:11:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:38.265 00:15:38.265 real 0m11.367s 00:15:38.265 user 0m0.824s 00:15:38.265 sys 0m0.976s 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 ************************************ 00:15:38.265 END TEST test_create_ublk 00:15:38.265 ************************************ 00:15:38.265 13:11:28 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:38.265 13:11:28 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:38.265 13:11:28 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.265 13:11:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 ************************************ 00:15:38.265 START TEST test_create_multi_ublk 00:15:38.265 ************************************ 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 [2024-07-25 13:11:28.930141] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:38.265 [2024-07-25 13:11:28.932419] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:38.265 13:11:28 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:38.265 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:38.265 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.265 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.265 [2024-07-25 13:11:29.170306] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:38.265 [2024-07-25 13:11:29.170849] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:38.265 [2024-07-25 13:11:29.170871] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:38.265 [2024-07-25 13:11:29.170881] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:38.265 [2024-07-25 13:11:29.179344] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:38.265 [2024-07-25 13:11:29.179369] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:38.265 [2024-07-25 13:11:29.186163] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:38.265 [2024-07-25 13:11:29.186889] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:38.265 [2024-07-25 13:11:29.197216] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:38.265 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.265 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.266 [2024-07-25 13:11:29.453303] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:38.266 [2024-07-25 13:11:29.453809] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:38.266 [2024-07-25 13:11:29.453826] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:38.266 [2024-07-25 13:11:29.453838] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:38.266 [2024-07-25 13:11:29.462411] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:38.266 [2024-07-25 13:11:29.462444] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:38.266 [2024-07-25 13:11:29.469144] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:38.266 [2024-07-25 13:11:29.469886] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:38.266 [2024-07-25 13:11:29.478183] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.266 [2024-07-25 13:11:29.733297] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:38.266 [2024-07-25 13:11:29.733788] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:38.266 [2024-07-25 13:11:29.733818] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:38.266 [2024-07-25 13:11:29.733829] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:38.266 [2024-07-25 13:11:29.742363] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:38.266 [2024-07-25 13:11:29.742400] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:38.266 [2024-07-25 13:11:29.748130] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:38.266 [2024-07-25 13:11:29.748862] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:38.266 [2024-07-25 13:11:29.758199] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:38.266 13:11:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.266 [2024-07-25 13:11:30.013431] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:38.266 [2024-07-25 13:11:30.014144] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:38.266 [2024-07-25 13:11:30.014176] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:38.266 [2024-07-25 13:11:30.014191] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:38.266 [2024-07-25 13:11:30.021208] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:38.266 [2024-07-25 13:11:30.021259] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:38.266 [2024-07-25 13:11:30.029187] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:38.266 [2024-07-25 13:11:30.030172] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:38.266 [2024-07-25 13:11:30.038134] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:38.266 { 00:15:38.266 "ublk_device": "/dev/ublkb0", 00:15:38.266 "id": 0, 00:15:38.266 "queue_depth": 512, 00:15:38.266 "num_queues": 4, 00:15:38.266 "bdev_name": "Malloc0" 00:15:38.266 }, 00:15:38.266 { 00:15:38.266 "ublk_device": "/dev/ublkb1", 00:15:38.266 "id": 1, 00:15:38.266 "queue_depth": 512, 00:15:38.266 "num_queues": 4, 00:15:38.266 "bdev_name": "Malloc1" 00:15:38.266 }, 00:15:38.266 { 00:15:38.266 "ublk_device": "/dev/ublkb2", 00:15:38.266 "id": 2, 00:15:38.266 "queue_depth": 512, 00:15:38.266 "num_queues": 4, 00:15:38.266 "bdev_name": "Malloc2" 00:15:38.266 }, 00:15:38.266 { 00:15:38.266 "ublk_device": "/dev/ublkb3", 00:15:38.266 "id": 3, 00:15:38.266 "queue_depth": 512, 00:15:38.266 "num_queues": 4, 00:15:38.266 "bdev_name": "Malloc3" 00:15:38.266 } 00:15:38.266 ]' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:38.266 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:38.524 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:38.783 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
00:15:39.040 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.040 13:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.040 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.040 [2024-07-25 13:11:31.098389] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:39.040 [2024-07-25 13:11:31.138656] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:39.040 [2024-07-25 13:11:31.142496] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:39.040 [2024-07-25 13:11:31.148129] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:39.040 [2024-07-25 13:11:31.148536] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:39.041 [2024-07-25 13:11:31.148560] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.041 [2024-07-25 13:11:31.156253] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:39.041 [2024-07-25 13:11:31.208550] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:39.041 [2024-07-25 13:11:31.213478] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:39.041 [2024-07-25 13:11:31.222147] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:39.041 [2024-07-25 13:11:31.222572] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:39.041 [2024-07-25 13:11:31.222591] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.041 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.299 [2024-07-25 13:11:31.230306] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:39.299 [2024-07-25 13:11:31.276189] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:39.299 [2024-07-25 13:11:31.277528] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:39.299 [2024-07-25 13:11:31.285141] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:39.299 [2024-07-25 13:11:31.285544] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:39.299 [2024-07-25 13:11:31.285568] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:39.299 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.299 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.299 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:39.299 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.299 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.299 [2024-07-25 13:11:31.293331] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:39.299 [2024-07-25 13:11:31.326624] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:39.299 [2024-07-25 13:11:31.331465] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:39.299 [2024-07-25 13:11:31.339163] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:39.299 [2024-07-25 13:11:31.339517] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:39.299 [2024-07-25 13:11:31.339537] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:39.299 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.299 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:39.558 [2024-07-25 13:11:31.637272] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:39.558 [2024-07-25 13:11:31.645131] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:39.558 [2024-07-25 13:11:31.645187] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:39.558 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:39.558 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.558 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:39.558 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.558 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.816 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.816 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.816 13:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:39.816 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.816 13:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.386 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.386 13:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:15:40.386 13:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:40.386 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.386 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.644 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.644 13:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.644 13:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:40.644 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.644 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:40.903 13:11:32 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:40.903 13:11:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:40.903 00:15:40.903 real 0m4.095s 00:15:40.903 user 0m1.329s 00:15:40.903 sys 0m0.171s 00:15:40.903 13:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.903 13:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.903 ************************************ 00:15:40.903 END TEST test_create_multi_ublk 00:15:40.903 ************************************ 00:15:40.903 13:11:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:40.903 13:11:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:40.903 13:11:33 ublk -- ublk/ublk.sh@130 -- # killprocess 76456 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@950 -- # '[' -z 76456 ']' 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@954 -- # kill -0 76456 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@955 -- # uname 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76456 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.903 
13:11:33 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.903 killing process with pid 76456 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76456' 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@969 -- # kill 76456 00:15:40.903 13:11:33 ublk -- common/autotest_common.sh@974 -- # wait 76456 00:15:42.275 [2024-07-25 13:11:34.055130] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:42.275 [2024-07-25 13:11:34.055205] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:43.207 00:15:43.207 real 0m27.531s 00:15:43.207 user 0m42.290s 00:15:43.207 sys 0m7.837s 00:15:43.207 13:11:35 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:43.207 13:11:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:43.207 ************************************ 00:15:43.207 END TEST ublk 00:15:43.207 ************************************ 00:15:43.207 13:11:35 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:43.207 13:11:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:43.207 13:11:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.207 13:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:43.207 ************************************ 00:15:43.207 START TEST ublk_recovery 00:15:43.207 ************************************ 00:15:43.207 13:11:35 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:43.207 * Looking for test storage... 00:15:43.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:43.207 13:11:35 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:43.207 13:11:35 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:43.207 13:11:35 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:43.207 13:11:35 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76843 00:15:43.207 13:11:35 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:43.207 13:11:35 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:43.207 13:11:35 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76843 00:15:43.207 13:11:35 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 76843 ']' 00:15:43.207 13:11:35 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.207 13:11:35 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.207 13:11:35 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:43.207 13:11:35 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.207 13:11:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.465 [2024-07-25 13:11:35.445563] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:43.465 [2024-07-25 13:11:35.446249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76843 ] 00:15:43.465 [2024-07-25 13:11:35.611618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:43.723 [2024-07-25 13:11:35.837481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.723 [2024-07-25 13:11:35.837492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:44.657 13:11:36 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.657 [2024-07-25 13:11:36.561130] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:44.657 [2024-07-25 13:11:36.563562] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.657 13:11:36 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.657 malloc0 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.657 13:11:36 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.657 [2024-07-25 13:11:36.689362] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:44.657 [2024-07-25 13:11:36.689497] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:44.657 [2024-07-25 13:11:36.689517] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:44.657 [2024-07-25 13:11:36.689538] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:44.657 [2024-07-25 13:11:36.698228] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:44.657 [2024-07-25 13:11:36.698273] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:44.657 [2024-07-25 13:11:36.705154] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:44.657 [2024-07-25 13:11:36.705358] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:44.657 [2024-07-25 13:11:36.721163] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:44.657 1 00:15:44.657 13:11:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:44.657 13:11:36 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:45.592 13:11:37 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76878 00:15:45.592 13:11:37 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:45.592 13:11:37 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:45.850 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.850 fio-3.35 00:15:45.850 Starting 1 process 00:15:51.115 13:11:42 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76843 00:15:51.115 13:11:42 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:56.373 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76843 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:56.373 13:11:47 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76988 00:15:56.373 13:11:47 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:56.373 13:11:47 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.373 13:11:47 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76988 00:15:56.373 13:11:47 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 76988 ']' 00:15:56.373 13:11:47 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.373 13:11:47 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.373 13:11:47 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.373 13:11:47 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.373 13:11:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.373 [2024-07-25 13:11:47.841828] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:56.373 [2024-07-25 13:11:47.842567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76988 ] 00:15:56.373 [2024-07-25 13:11:48.013345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.373 [2024-07-25 13:11:48.242442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.373 [2024-07-25 13:11:48.242448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.939 13:11:48 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.939 13:11:48 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:56.939 13:11:48 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:56.939 13:11:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.939 13:11:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.939 [2024-07-25 13:11:48.969130] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:56.939 [2024-07-25 13:11:48.971535] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:56.939 13:11:48 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.939 13:11:48 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:56.939 13:11:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.939 13:11:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.939 malloc0 00:15:56.939 13:11:49 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.939 13:11:49 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:56.939 13:11:49 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.939 13:11:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.939 [2024-07-25 13:11:49.103299] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:56.939 [2024-07-25 13:11:49.103356] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:56.939 [2024-07-25 13:11:49.103370] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:56.939 [2024-07-25 13:11:49.111170] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:56.939 [2024-07-25 13:11:49.111199] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:56.939 1 00:15:56.939 [2024-07-25 13:11:49.111294] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:56.939 13:11:49 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.939 13:11:49 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76878 00:15:56.939 [2024-07-25 13:11:49.119148] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:56.939 [2024-07-25 13:11:49.126800] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:57.197 [2024-07-25 13:11:49.134410] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:57.197 [2024-07-25 13:11:49.134443] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:53.441 00:16:53.441 
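For reference, the user-recovery scenario exercised by the log above can be reproduced by hand. This is only a sketch, assuming an SPDK checkout (scripts/rpc.py, build/bin/spdk_tgt) with the ublk_drv kernel module already loaded; the bdev name, queue settings, and fio options mirror the values recorded in the log, while TGT_PID and FIO_PID are illustrative shell variables introduced here:

    build/bin/spdk_tgt -m 0x3 -L ublk & TGT_PID=$!   # start the target and note its pid
    # give the target a moment to bring up /var/tmp/spdk.sock before issuing RPCs
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & FIO_PID=$!
    kill -9 "$TGT_PID"                               # simulate a target crash mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &              # restart the target
    scripts/rpc.py ublk_recover_disk malloc0 1       # re-attach the existing /dev/ublkb1
    wait "$FIO_PID"                                  # fio finishes against the recovered disk

In the harness this sequence is driven by test/ublk/ublk_recovery.sh, which additionally waits for the RPC socket between steps; the fio summary that follows is the run started before the crash completing against the recovered device.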
fio_test: (groupid=0, jobs=1): err= 0: pid=76881: Thu Jul 25 13:12:37 2024 00:16:53.441 read: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(4214MiB/60002msec) 00:16:53.441 slat (nsec): min=1899, max=1064.9k, avg=6487.67, stdev=3264.69 00:16:53.441 clat (usec): min=1179, max=6409.2k, avg=3513.79, stdev=50098.35 00:16:53.441 lat (usec): min=1186, max=6409.2k, avg=3520.28, stdev=50098.33 00:16:53.441 clat percentiles (usec): 00:16:53.441 | 1.00th=[ 2540], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2900], 00:16:53.441 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:16:53.441 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 4080], 00:16:53.441 | 99.00th=[ 5735], 99.50th=[ 6390], 99.90th=[ 7767], 99.95th=[ 8717], 00:16:53.441 | 99.99th=[13304] 00:16:53.441 bw ( KiB/s): min=25584, max=84144, per=100.00%, avg=79994.45, stdev=7638.22, samples=107 00:16:53.441 iops : min= 6396, max=21036, avg=19998.61, stdev=1909.55, samples=107 00:16:53.441 write: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(4211MiB/60002msec); 0 zone resets 00:16:53.441 slat (usec): min=2, max=1093, avg= 6.57, stdev= 3.16 00:16:53.441 clat (usec): min=1017, max=6409.2k, avg=3593.22, stdev=48572.46 00:16:53.441 lat (usec): min=1020, max=6409.2k, avg=3599.79, stdev=48572.45 00:16:53.441 clat percentiles (usec): 00:16:53.441 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:16:53.441 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:16:53.441 | 70.00th=[ 3195], 80.00th=[ 3228], 90.00th=[ 3392], 95.00th=[ 4047], 00:16:53.441 | 99.00th=[ 5669], 99.50th=[ 6456], 99.90th=[ 7767], 99.95th=[ 8717], 00:16:53.441 | 99.99th=[12911] 00:16:53.441 bw ( KiB/s): min=26184, max=83376, per=100.00%, avg=79939.21, stdev=7574.62, samples=107 00:16:53.441 iops : min= 6546, max=20844, avg=19984.79, stdev=1893.65, samples=107 00:16:53.441 lat (msec) : 2=0.08%, 4=94.60%, 10=5.30%, 20=0.01%, >=2000=0.01% 00:16:53.441 cpu : usr=10.19%, sys=21.91%, ctx=73948, majf=0, minf=13 00:16:53.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:53.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.441 issued rwts: total=1078743,1078058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.441 00:16:53.441 Run status group 0 (all jobs): 00:16:53.441 READ: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=4214MiB (4419MB), run=60002-60002msec 00:16:53.441 WRITE: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=4211MiB (4416MB), run=60002-60002msec 00:16:53.441 00:16:53.441 Disk stats (read/write): 00:16:53.441 ublkb1: ios=1076394/1075793, merge=0/0, ticks=3688415/3651584, in_queue=7339999, util=99.93% 00:16:53.441 13:12:37 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:53.441 13:12:37 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.442 13:12:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.442 [2024-07-25 13:12:37.992606] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:53.442 [2024-07-25 13:12:38.031264] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:53.442 [2024-07-25 13:12:38.031564] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:53.442 [2024-07-25 
13:12:38.040232] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:53.442 [2024-07-25 13:12:38.040421] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:53.442 [2024-07-25 13:12:38.040446] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.442 13:12:38 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.442 [2024-07-25 13:12:38.055260] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:53.442 [2024-07-25 13:12:38.066164] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:53.442 [2024-07-25 13:12:38.066238] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.442 13:12:38 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:53.442 13:12:38 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:53.442 13:12:38 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76988 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 76988 ']' 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 76988 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76988 00:16:53.442 killing process with pid 76988 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76988' 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@969 -- # kill 76988 00:16:53.442 13:12:38 ublk_recovery -- common/autotest_common.sh@974 -- # wait 76988 00:16:53.442 [2024-07-25 13:12:39.095533] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:53.442 [2024-07-25 13:12:39.095603] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:53.442 00:16:53.442 real 1m5.130s 00:16:53.442 user 1m48.491s 00:16:53.442 sys 0m30.049s 00:16:53.442 13:12:40 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.442 13:12:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.442 ************************************ 00:16:53.442 END TEST ublk_recovery 00:16:53.442 ************************************ 00:16:53.442 13:12:40 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@264 -- # timing_exit lib 00:16:53.442 13:12:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.442 13:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:53.442 13:12:40 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:53.442 
13:12:40 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:16:53.442 13:12:40 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:53.442 13:12:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:53.442 13:12:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.442 13:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:53.442 ************************************ 00:16:53.442 START TEST ftl 00:16:53.442 ************************************ 00:16:53.442 13:12:40 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:53.442 * Looking for test storage... 00:16:53.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:53.442 13:12:40 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:53.442 13:12:40 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:53.442 13:12:40 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:53.442 13:12:40 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:53.442 13:12:40 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:53.442 13:12:40 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:53.442 13:12:40 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.442 13:12:40 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:53.442 13:12:40 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:53.442 13:12:40 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:53.442 13:12:40 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:53.442 13:12:40 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:53.442 13:12:40 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:53.442 13:12:40 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:53.442 13:12:40 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:53.442 13:12:40 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:53.442 13:12:40 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:53.442 13:12:40 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:53.442 13:12:40 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:53.442 13:12:40 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:53.442 13:12:40 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:53.442 13:12:40 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:53.442 13:12:40 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:53.442 13:12:40 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:53.442 13:12:40 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:53.442 13:12:40 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:53.442 13:12:40 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:53.442 13:12:40 ftl -- ftl/common.sh@25 
-- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:53.442 13:12:40 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:53.442 13:12:40 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.442 13:12:40 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:53.442 13:12:40 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:53.442 13:12:40 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:53.442 13:12:40 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:53.442 13:12:40 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:53.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:53.442 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:53.442 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:53.442 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:53.442 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:53.442 13:12:41 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77773 00:16:53.442 13:12:41 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:53.442 13:12:41 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77773 00:16:53.442 13:12:41 ftl -- common/autotest_common.sh@831 -- # '[' -z 77773 ']' 00:16:53.442 13:12:41 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.442 13:12:41 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.442 13:12:41 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.442 13:12:41 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.442 13:12:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:53.442 [2024-07-25 13:12:41.232881] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:53.442 [2024-07-25 13:12:41.233077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77773 ] 00:16:53.442 [2024-07-25 13:12:41.404676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.442 [2024-07-25 13:12:41.620973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.442 13:12:42 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.442 13:12:42 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:53.442 13:12:42 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:53.442 13:12:42 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:53.442 13:12:43 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:53.442 13:12:43 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:53.442 13:12:43 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:53.442 13:12:43 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:53.443 13:12:43 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@50 -- # break 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@63 -- # break 00:16:53.443 13:12:44 ftl -- ftl/ftl.sh@66 -- # killprocess 77773 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@950 -- # '[' -z 77773 ']' 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@954 -- # kill -0 77773 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@955 -- # uname 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77773 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.443 killing process with pid 77773 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77773' 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@969 -- # kill 77773 00:16:53.443 13:12:44 ftl -- common/autotest_common.sh@974 -- # wait 77773 00:16:54.813 13:12:46 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:54.813 13:12:46 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:54.813 13:12:46 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:54.813 13:12:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.813 13:12:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:54.813 ************************************ 00:16:54.813 START TEST ftl_fio_basic 00:16:54.813 ************************************ 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:54.813 * Looking for test storage... 00:16:54.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:54.813 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77915 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77915 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 77915 ']' 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.814 13:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:54.814 [2024-07-25 13:12:46.818404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:54.814 [2024-07-25 13:12:46.818568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77915 ] 00:16:54.814 [2024-07-25 13:12:46.979178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.071 [2024-07-25 13:12:47.169803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.071 [2024-07-25 13:12:47.169891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.071 [2024-07-25 13:12:47.169895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:56.004 13:12:47 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:56.262 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:56.520 { 00:16:56.520 "name": "nvme0n1", 00:16:56.520 "aliases": [ 00:16:56.520 "c3b104f1-1e55-431c-bb09-1446bf96ab01" 00:16:56.520 ], 00:16:56.520 "product_name": "NVMe disk", 00:16:56.520 "block_size": 4096, 00:16:56.520 "num_blocks": 1310720, 00:16:56.520 "uuid": "c3b104f1-1e55-431c-bb09-1446bf96ab01", 00:16:56.520 "assigned_rate_limits": { 00:16:56.520 "rw_ios_per_sec": 0, 00:16:56.520 "rw_mbytes_per_sec": 0, 00:16:56.520 "r_mbytes_per_sec": 0, 00:16:56.520 "w_mbytes_per_sec": 0 00:16:56.520 }, 00:16:56.520 "claimed": false, 00:16:56.520 "zoned": false, 00:16:56.520 "supported_io_types": { 00:16:56.520 "read": true, 00:16:56.520 "write": true, 00:16:56.520 "unmap": true, 00:16:56.520 "flush": true, 00:16:56.520 "reset": true, 00:16:56.520 "nvme_admin": true, 00:16:56.520 "nvme_io": true, 00:16:56.520 "nvme_io_md": false, 00:16:56.520 "write_zeroes": true, 00:16:56.520 "zcopy": false, 00:16:56.520 "get_zone_info": false, 00:16:56.520 "zone_management": false, 00:16:56.520 "zone_append": false, 00:16:56.520 "compare": true, 00:16:56.520 "compare_and_write": false, 00:16:56.520 "abort": true, 00:16:56.520 "seek_hole": false, 00:16:56.520 
"seek_data": false, 00:16:56.520 "copy": true, 00:16:56.520 "nvme_iov_md": false 00:16:56.520 }, 00:16:56.520 "driver_specific": { 00:16:56.520 "nvme": [ 00:16:56.520 { 00:16:56.520 "pci_address": "0000:00:11.0", 00:16:56.520 "trid": { 00:16:56.520 "trtype": "PCIe", 00:16:56.520 "traddr": "0000:00:11.0" 00:16:56.520 }, 00:16:56.520 "ctrlr_data": { 00:16:56.520 "cntlid": 0, 00:16:56.520 "vendor_id": "0x1b36", 00:16:56.520 "model_number": "QEMU NVMe Ctrl", 00:16:56.520 "serial_number": "12341", 00:16:56.520 "firmware_revision": "8.0.0", 00:16:56.520 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:56.520 "oacs": { 00:16:56.520 "security": 0, 00:16:56.520 "format": 1, 00:16:56.520 "firmware": 0, 00:16:56.520 "ns_manage": 1 00:16:56.520 }, 00:16:56.520 "multi_ctrlr": false, 00:16:56.520 "ana_reporting": false 00:16:56.520 }, 00:16:56.520 "vs": { 00:16:56.520 "nvme_version": "1.4" 00:16:56.520 }, 00:16:56.520 "ns_data": { 00:16:56.520 "id": 1, 00:16:56.520 "can_share": false 00:16:56.520 } 00:16:56.520 } 00:16:56.520 ], 00:16:56.520 "mp_policy": "active_passive" 00:16:56.520 } 00:16:56.520 } 00:16:56.520 ]' 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:56.520 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:56.778 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:56.778 13:12:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:57.035 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ead6d8ae-be6b-40b2-af99-39e83d3b2c78 00:16:57.035 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ead6d8ae-be6b-40b2-af99-39e83d3b2c78 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1c04990f-501a-4031-958a-9887a4b412b7 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1c04990f-501a-4031-958a-9887a4b412b7 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1c04990f-501a-4031-958a-9887a4b412b7 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1c04990f-501a-4031-958a-9887a4b412b7 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1c04990f-501a-4031-958a-9887a4b412b7 00:16:57.601 13:12:49 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:57.601 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c04990f-501a-4031-958a-9887a4b412b7 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:57.859 { 00:16:57.859 "name": "1c04990f-501a-4031-958a-9887a4b412b7", 00:16:57.859 "aliases": [ 00:16:57.859 "lvs/nvme0n1p0" 00:16:57.859 ], 00:16:57.859 "product_name": "Logical Volume", 00:16:57.859 "block_size": 4096, 00:16:57.859 "num_blocks": 26476544, 00:16:57.859 "uuid": "1c04990f-501a-4031-958a-9887a4b412b7", 00:16:57.859 "assigned_rate_limits": { 00:16:57.859 "rw_ios_per_sec": 0, 00:16:57.859 "rw_mbytes_per_sec": 0, 00:16:57.859 "r_mbytes_per_sec": 0, 00:16:57.859 "w_mbytes_per_sec": 0 00:16:57.859 }, 00:16:57.859 "claimed": false, 00:16:57.859 "zoned": false, 00:16:57.859 "supported_io_types": { 00:16:57.859 "read": true, 00:16:57.859 "write": true, 00:16:57.859 "unmap": true, 00:16:57.859 "flush": false, 00:16:57.859 "reset": true, 00:16:57.859 "nvme_admin": false, 00:16:57.859 "nvme_io": false, 00:16:57.859 "nvme_io_md": false, 00:16:57.859 "write_zeroes": true, 00:16:57.859 "zcopy": false, 00:16:57.859 "get_zone_info": false, 00:16:57.859 "zone_management": false, 00:16:57.859 "zone_append": false, 00:16:57.859 "compare": false, 00:16:57.859 "compare_and_write": false, 00:16:57.859 "abort": false, 00:16:57.859 "seek_hole": true, 00:16:57.859 "seek_data": true, 00:16:57.859 "copy": false, 00:16:57.859 "nvme_iov_md": false 00:16:57.859 }, 00:16:57.859 "driver_specific": { 00:16:57.859 "lvol": { 00:16:57.859 "lvol_store_uuid": "ead6d8ae-be6b-40b2-af99-39e83d3b2c78", 00:16:57.859 "base_bdev": "nvme0n1", 00:16:57.859 "thin_provision": true, 00:16:57.859 "num_allocated_clusters": 0, 00:16:57.859 "snapshot": false, 00:16:57.859 "clone": false, 00:16:57.859 "esnap_clone": false 00:16:57.859 } 00:16:57.859 } 00:16:57.859 } 00:16:57.859 ]' 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:57.859 13:12:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1c04990f-501a-4031-958a-9887a4b412b7 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1c04990f-501a-4031-958a-9887a4b412b7 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:58.212 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c04990f-501a-4031-958a-9887a4b412b7 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:58.495 { 00:16:58.495 "name": "1c04990f-501a-4031-958a-9887a4b412b7", 00:16:58.495 "aliases": [ 00:16:58.495 "lvs/nvme0n1p0" 00:16:58.495 ], 00:16:58.495 "product_name": "Logical Volume", 00:16:58.495 "block_size": 4096, 00:16:58.495 "num_blocks": 26476544, 00:16:58.495 "uuid": "1c04990f-501a-4031-958a-9887a4b412b7", 00:16:58.495 "assigned_rate_limits": { 00:16:58.495 "rw_ios_per_sec": 0, 00:16:58.495 "rw_mbytes_per_sec": 0, 00:16:58.495 "r_mbytes_per_sec": 0, 00:16:58.495 "w_mbytes_per_sec": 0 00:16:58.495 }, 00:16:58.495 "claimed": false, 00:16:58.495 "zoned": false, 00:16:58.495 "supported_io_types": { 00:16:58.495 "read": true, 00:16:58.495 "write": true, 00:16:58.495 "unmap": true, 00:16:58.495 "flush": false, 00:16:58.495 "reset": true, 00:16:58.495 "nvme_admin": false, 00:16:58.495 "nvme_io": false, 00:16:58.495 "nvme_io_md": false, 00:16:58.495 "write_zeroes": true, 00:16:58.495 "zcopy": false, 00:16:58.495 "get_zone_info": false, 00:16:58.495 "zone_management": false, 00:16:58.495 "zone_append": false, 00:16:58.495 "compare": false, 00:16:58.495 "compare_and_write": false, 00:16:58.495 "abort": false, 00:16:58.495 "seek_hole": true, 00:16:58.495 "seek_data": true, 00:16:58.495 "copy": false, 00:16:58.495 "nvme_iov_md": false 00:16:58.495 }, 00:16:58.495 "driver_specific": { 00:16:58.495 "lvol": { 00:16:58.495 "lvol_store_uuid": "ead6d8ae-be6b-40b2-af99-39e83d3b2c78", 00:16:58.495 "base_bdev": "nvme0n1", 00:16:58.495 "thin_provision": true, 00:16:58.495 "num_allocated_clusters": 0, 00:16:58.495 "snapshot": false, 00:16:58.495 "clone": false, 00:16:58.495 "esnap_clone": false 00:16:58.495 } 00:16:58.495 } 00:16:58.495 } 00:16:58.495 ]' 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:58.495 13:12:50 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:58.755 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1c04990f-501a-4031-958a-9887a4b412b7 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1c04990f-501a-4031-958a-9887a4b412b7 
00:16:58.755 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:58.755 13:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c04990f-501a-4031-958a-9887a4b412b7 00:16:59.012 13:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:59.012 { 00:16:59.012 "name": "1c04990f-501a-4031-958a-9887a4b412b7", 00:16:59.013 "aliases": [ 00:16:59.013 "lvs/nvme0n1p0" 00:16:59.013 ], 00:16:59.013 "product_name": "Logical Volume", 00:16:59.013 "block_size": 4096, 00:16:59.013 "num_blocks": 26476544, 00:16:59.013 "uuid": "1c04990f-501a-4031-958a-9887a4b412b7", 00:16:59.013 "assigned_rate_limits": { 00:16:59.013 "rw_ios_per_sec": 0, 00:16:59.013 "rw_mbytes_per_sec": 0, 00:16:59.013 "r_mbytes_per_sec": 0, 00:16:59.013 "w_mbytes_per_sec": 0 00:16:59.013 }, 00:16:59.013 "claimed": false, 00:16:59.013 "zoned": false, 00:16:59.013 "supported_io_types": { 00:16:59.013 "read": true, 00:16:59.013 "write": true, 00:16:59.013 "unmap": true, 00:16:59.013 "flush": false, 00:16:59.013 "reset": true, 00:16:59.013 "nvme_admin": false, 00:16:59.013 "nvme_io": false, 00:16:59.013 "nvme_io_md": false, 00:16:59.013 "write_zeroes": true, 00:16:59.013 "zcopy": false, 00:16:59.013 "get_zone_info": false, 00:16:59.013 "zone_management": false, 00:16:59.013 "zone_append": false, 00:16:59.013 "compare": false, 00:16:59.013 "compare_and_write": false, 00:16:59.013 "abort": false, 00:16:59.013 "seek_hole": true, 00:16:59.013 "seek_data": true, 00:16:59.013 "copy": false, 00:16:59.013 "nvme_iov_md": false 00:16:59.013 }, 00:16:59.013 "driver_specific": { 00:16:59.013 "lvol": { 00:16:59.013 "lvol_store_uuid": "ead6d8ae-be6b-40b2-af99-39e83d3b2c78", 00:16:59.013 "base_bdev": "nvme0n1", 00:16:59.013 "thin_provision": true, 00:16:59.013 "num_allocated_clusters": 0, 00:16:59.013 "snapshot": false, 00:16:59.013 "clone": false, 00:16:59.013 "esnap_clone": false 00:16:59.013 } 00:16:59.013 } 00:16:59.013 } 00:16:59.013 ]' 00:16:59.013 13:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:59.270 13:12:51 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1c04990f-501a-4031-958a-9887a4b412b7 -c nvc0n1p0 --l2p_dram_limit 60 00:16:59.528 [2024-07-25 13:12:51.503738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.503807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.528 [2024-07-25 13:12:51.503830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.528 [2024-07-25 13:12:51.503845] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.503942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.503963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.528 [2024-07-25 13:12:51.503976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:59.528 [2024-07-25 13:12:51.503989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.504025] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.528 [2024-07-25 13:12:51.505070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.528 [2024-07-25 13:12:51.505125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.505150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.528 [2024-07-25 13:12:51.505164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:16:59.528 [2024-07-25 13:12:51.505177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.505314] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 563b33f0-f637-4c53-bf97-a9f399c93bd7 00:16:59.528 [2024-07-25 13:12:51.506382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.506423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:59.528 [2024-07-25 13:12:51.506443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:59.528 [2024-07-25 13:12:51.506456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.511083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.511139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.528 [2024-07-25 13:12:51.511164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:16:59.528 [2024-07-25 13:12:51.511176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.511312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.511333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.528 [2024-07-25 13:12:51.511349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:16:59.528 [2024-07-25 13:12:51.511361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.511445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.511469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.528 [2024-07-25 13:12:51.511485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:59.528 [2024-07-25 13:12:51.511500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.511542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.528 [2024-07-25 13:12:51.516073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.516138] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.528 [2024-07-25 13:12:51.516156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.542 ms 00:16:59.528 [2024-07-25 13:12:51.516170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.516230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.516254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.528 [2024-07-25 13:12:51.516267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:59.528 [2024-07-25 13:12:51.516280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.516360] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:59.528 [2024-07-25 13:12:51.516544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:59.528 [2024-07-25 13:12:51.516575] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.528 [2024-07-25 13:12:51.516597] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:59.528 [2024-07-25 13:12:51.516613] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:59.528 [2024-07-25 13:12:51.516632] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.528 [2024-07-25 13:12:51.516644] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:59.528 [2024-07-25 13:12:51.516657] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.528 [2024-07-25 13:12:51.516671] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:59.528 [2024-07-25 13:12:51.516684] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:59.528 [2024-07-25 13:12:51.516696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.516710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.528 [2024-07-25 13:12:51.516722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:16:59.528 [2024-07-25 13:12:51.516734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.516835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.528 [2024-07-25 13:12:51.516852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.528 [2024-07-25 13:12:51.516865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:59.528 [2024-07-25 13:12:51.516878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.528 [2024-07-25 13:12:51.517000] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.528 [2024-07-25 13:12:51.517032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:59.528 [2024-07-25 13:12:51.517047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.528 [2024-07-25 13:12:51.517061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.528 [2024-07-25 13:12:51.517073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.528 [2024-07-25 
13:12:51.517085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.528 [2024-07-25 13:12:51.517097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:59.528 [2024-07-25 13:12:51.517124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.528 [2024-07-25 13:12:51.517138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:59.528 [2024-07-25 13:12:51.517151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.528 [2024-07-25 13:12:51.517162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.528 [2024-07-25 13:12:51.517184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:59.528 [2024-07-25 13:12:51.517195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.528 [2024-07-25 13:12:51.517208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.528 [2024-07-25 13:12:51.517219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:59.528 [2024-07-25 13:12:51.517236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.528 [2024-07-25 13:12:51.517247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:59.529 [2024-07-25 13:12:51.517261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:59.529 [2024-07-25 13:12:51.517272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.529 [2024-07-25 13:12:51.517295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.529 [2024-07-25 13:12:51.517319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.529 [2024-07-25 13:12:51.517331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.529 [2024-07-25 13:12:51.517365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.529 [2024-07-25 13:12:51.517376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.529 [2024-07-25 13:12:51.517398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:59.529 [2024-07-25 13:12:51.517411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.529 [2024-07-25 13:12:51.517435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.529 [2024-07-25 13:12:51.517445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.529 [2024-07-25 13:12:51.517470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.529 [2024-07-25 13:12:51.517483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:59.529 [2024-07-25 13:12:51.517493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:59.529 [2024-07-25 13:12:51.517507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:59.529 [2024-07-25 13:12:51.517518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:59.529 [2024-07-25 13:12:51.517530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:59.529 [2024-07-25 13:12:51.517553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:59.529 [2024-07-25 13:12:51.517564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517576] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.529 [2024-07-25 13:12:51.517587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.529 [2024-07-25 13:12:51.517622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.529 [2024-07-25 13:12:51.517634] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.529 [2024-07-25 13:12:51.517649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:59.529 [2024-07-25 13:12:51.517660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.529 [2024-07-25 13:12:51.517675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.529 [2024-07-25 13:12:51.517685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.529 [2024-07-25 13:12:51.517698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.529 [2024-07-25 13:12:51.517708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.529 [2024-07-25 13:12:51.517725] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.529 [2024-07-25 13:12:51.517740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.529 [2024-07-25 13:12:51.517758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:59.529 [2024-07-25 13:12:51.517770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:59.529 [2024-07-25 13:12:51.517784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:59.529 [2024-07-25 13:12:51.517796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:59.529 [2024-07-25 13:12:51.517811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:59.529 [2024-07-25 13:12:51.517823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:59.529 [2024-07-25 13:12:51.517836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:59.529 [2024-07-25 13:12:51.517848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:59.529 [2024-07-25 
13:12:51.517861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:59.529 [2024-07-25 13:12:51.517875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:59.529 [2024-07-25 13:12:51.517892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:59.529 [2024-07-25 13:12:51.517903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:59.529 [2024-07-25 13:12:51.517916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:59.529 [2024-07-25 13:12:51.517928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:59.529 [2024-07-25 13:12:51.517941] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.529 [2024-07-25 13:12:51.517954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.529 [2024-07-25 13:12:51.517968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.529 [2024-07-25 13:12:51.517979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.529 [2024-07-25 13:12:51.517993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.529 [2024-07-25 13:12:51.518005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.529 [2024-07-25 13:12:51.518020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.529 [2024-07-25 13:12:51.518031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.529 [2024-07-25 13:12:51.518045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:16:59.529 [2024-07-25 13:12:51.518057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.529 [2024-07-25 13:12:51.518158] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:59.529 [2024-07-25 13:12:51.518176] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:03.712 [2024-07-25 13:12:55.269531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.269608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:03.712 [2024-07-25 13:12:55.269635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3751.388 ms 00:17:03.712 [2024-07-25 13:12:55.269649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.302495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.302558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.712 [2024-07-25 13:12:55.302585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.559 ms 00:17:03.712 [2024-07-25 13:12:55.302598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.302810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.302831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:03.712 [2024-07-25 13:12:55.302847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:03.712 [2024-07-25 13:12:55.302862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.352177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.352242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.712 [2024-07-25 13:12:55.352269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.239 ms 00:17:03.712 [2024-07-25 13:12:55.352282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.352357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.352373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.712 [2024-07-25 13:12:55.352389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.712 [2024-07-25 13:12:55.352400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.352831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.352854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.712 [2024-07-25 13:12:55.352870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:03.712 [2024-07-25 13:12:55.352882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.353066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.353087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.712 [2024-07-25 13:12:55.353103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:03.712 [2024-07-25 13:12:55.353142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.371651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.371735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.712 [2024-07-25 
13:12:55.371763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.462 ms 00:17:03.712 [2024-07-25 13:12:55.371788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.385715] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:03.712 [2024-07-25 13:12:55.400026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.400131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:03.712 [2024-07-25 13:12:55.400156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.069 ms 00:17:03.712 [2024-07-25 13:12:55.400182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.456872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.456959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:03.712 [2024-07-25 13:12:55.456982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.615 ms 00:17:03.712 [2024-07-25 13:12:55.456997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.457306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.457331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:03.712 [2024-07-25 13:12:55.457345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:17:03.712 [2024-07-25 13:12:55.457363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.490217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.490307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:03.712 [2024-07-25 13:12:55.490330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.723 ms 00:17:03.712 [2024-07-25 13:12:55.490345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.522456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.522541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:03.712 [2024-07-25 13:12:55.522566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.026 ms 00:17:03.712 [2024-07-25 13:12:55.522580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.523345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.523377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:03.712 [2024-07-25 13:12:55.523397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:17:03.712 [2024-07-25 13:12:55.523411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.615003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.615096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:03.712 [2024-07-25 13:12:55.615142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.496 ms 00:17:03.712 [2024-07-25 13:12:55.615176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 
13:12:55.648487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.648561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:03.712 [2024-07-25 13:12:55.648584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.205 ms 00:17:03.712 [2024-07-25 13:12:55.648599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.680361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.680431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:03.712 [2024-07-25 13:12:55.680452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.690 ms 00:17:03.712 [2024-07-25 13:12:55.680467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.712198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.712270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:03.712 [2024-07-25 13:12:55.712292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.665 ms 00:17:03.712 [2024-07-25 13:12:55.712306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.712387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.712409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:03.712 [2024-07-25 13:12:55.712422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:03.712 [2024-07-25 13:12:55.712439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.712587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.712 [2024-07-25 13:12:55.712612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:03.712 [2024-07-25 13:12:55.712625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:03.712 [2024-07-25 13:12:55.712638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.712 [2024-07-25 13:12:55.713740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4209.523 ms, result 0 00:17:03.712 { 00:17:03.712 "name": "ftl0", 00:17:03.712 "uuid": "563b33f0-f637-4c53-bf97-a9f399c93bd7" 00:17:03.712 } 00:17:03.712 13:12:55 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:03.712 13:12:55 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:03.712 13:12:55 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.712 13:12:55 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:17:03.712 13:12:55 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.712 13:12:55 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.713 13:12:55 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:03.971 13:12:55 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:04.230 [ 00:17:04.230 { 00:17:04.230 "name": "ftl0", 00:17:04.230 "aliases": [ 00:17:04.230 "563b33f0-f637-4c53-bf97-a9f399c93bd7" 00:17:04.230 ], 00:17:04.230 "product_name": "FTL 
disk", 00:17:04.230 "block_size": 4096, 00:17:04.230 "num_blocks": 20971520, 00:17:04.230 "uuid": "563b33f0-f637-4c53-bf97-a9f399c93bd7", 00:17:04.230 "assigned_rate_limits": { 00:17:04.230 "rw_ios_per_sec": 0, 00:17:04.230 "rw_mbytes_per_sec": 0, 00:17:04.230 "r_mbytes_per_sec": 0, 00:17:04.230 "w_mbytes_per_sec": 0 00:17:04.230 }, 00:17:04.230 "claimed": false, 00:17:04.230 "zoned": false, 00:17:04.230 "supported_io_types": { 00:17:04.230 "read": true, 00:17:04.230 "write": true, 00:17:04.230 "unmap": true, 00:17:04.230 "flush": true, 00:17:04.230 "reset": false, 00:17:04.230 "nvme_admin": false, 00:17:04.230 "nvme_io": false, 00:17:04.230 "nvme_io_md": false, 00:17:04.230 "write_zeroes": true, 00:17:04.230 "zcopy": false, 00:17:04.230 "get_zone_info": false, 00:17:04.230 "zone_management": false, 00:17:04.230 "zone_append": false, 00:17:04.230 "compare": false, 00:17:04.230 "compare_and_write": false, 00:17:04.230 "abort": false, 00:17:04.230 "seek_hole": false, 00:17:04.230 "seek_data": false, 00:17:04.230 "copy": false, 00:17:04.230 "nvme_iov_md": false 00:17:04.230 }, 00:17:04.230 "driver_specific": { 00:17:04.230 "ftl": { 00:17:04.230 "base_bdev": "1c04990f-501a-4031-958a-9887a4b412b7", 00:17:04.230 "cache": "nvc0n1p0" 00:17:04.230 } 00:17:04.230 } 00:17:04.230 } 00:17:04.230 ] 00:17:04.230 13:12:56 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:17:04.230 13:12:56 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:04.230 13:12:56 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:04.488 13:12:56 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:04.488 13:12:56 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:04.747 [2024-07-25 13:12:56.835359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.835434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:04.747 [2024-07-25 13:12:56.835465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:04.747 [2024-07-25 13:12:56.835478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.835527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:04.747 [2024-07-25 13:12:56.838916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.838960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:04.747 [2024-07-25 13:12:56.838977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:17:04.747 [2024-07-25 13:12:56.838991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.839503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.839538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:04.747 [2024-07-25 13:12:56.839552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:17:04.747 [2024-07-25 13:12:56.839569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.842880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.842917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:04.747 
[2024-07-25 13:12:56.842933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:17:04.747 [2024-07-25 13:12:56.842946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.849663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.849706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:04.747 [2024-07-25 13:12:56.849722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.681 ms 00:17:04.747 [2024-07-25 13:12:56.849743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.882143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.882232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:04.747 [2024-07-25 13:12:56.882254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.278 ms 00:17:04.747 [2024-07-25 13:12:56.882269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.901821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.901919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:04.747 [2024-07-25 13:12:56.901942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.457 ms 00:17:04.747 [2024-07-25 13:12:56.901957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.902310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.902344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:04.747 [2024-07-25 13:12:56.902360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:17:04.747 [2024-07-25 13:12:56.902374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.747 [2024-07-25 13:12:56.934370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.747 [2024-07-25 13:12:56.934452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:04.747 [2024-07-25 13:12:56.934474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.961 ms 00:17:04.747 [2024-07-25 13:12:56.934488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.007 [2024-07-25 13:12:56.966365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.007 [2024-07-25 13:12:56.966455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:05.007 [2024-07-25 13:12:56.966478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.791 ms 00:17:05.007 [2024-07-25 13:12:56.966492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.007 [2024-07-25 13:12:56.998440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.007 [2024-07-25 13:12:56.998522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:05.007 [2024-07-25 13:12:56.998544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.862 ms 00:17:05.007 [2024-07-25 13:12:56.998559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.007 [2024-07-25 13:12:57.030862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.007 [2024-07-25 13:12:57.030934] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:05.007 [2024-07-25 13:12:57.030971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.108 ms 00:17:05.007 [2024-07-25 13:12:57.030985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.007 [2024-07-25 13:12:57.031055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:05.007 [2024-07-25 13:12:57.031084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:05.007 [2024-07-25 13:12:57.031268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 
[2024-07-25 13:12:57.031426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:05.008 [2024-07-25 13:12:57.031759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.031997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:05.008 [2024-07-25 13:12:57.032480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:05.009 [2024-07-25 13:12:57.032505] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:05.009 [2024-07-25 13:12:57.032518] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 563b33f0-f637-4c53-bf97-a9f399c93bd7 00:17:05.009 [2024-07-25 13:12:57.032532] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:05.009 [2024-07-25 13:12:57.032546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:05.009 [2024-07-25 13:12:57.032561] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:05.009 [2024-07-25 13:12:57.032573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:05.009 [2024-07-25 13:12:57.032586] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:05.009 [2024-07-25 13:12:57.032597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:05.009 [2024-07-25 13:12:57.032610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:05.009 [2024-07-25 13:12:57.032620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:05.009 [2024-07-25 13:12:57.032632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:05.009 [2024-07-25 13:12:57.032643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.009 [2024-07-25 13:12:57.032657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:05.009 [2024-07-25 13:12:57.032670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:17:05.009 [2024-07-25 13:12:57.032683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.009 [2024-07-25 13:12:57.049630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.009 [2024-07-25 13:12:57.049707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:05.009 [2024-07-25 13:12:57.049728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.851 ms 00:17:05.009 [2024-07-25 13:12:57.049742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.009 [2024-07-25 13:12:57.050222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.009 [2024-07-25 13:12:57.050253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:05.009 [2024-07-25 13:12:57.050268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:05.009 [2024-07-25 13:12:57.050282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.009 [2024-07-25 13:12:57.108771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.009 [2024-07-25 13:12:57.108844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.009 [2024-07-25 13:12:57.108865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.009 [2024-07-25 13:12:57.108879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:05.009 [2024-07-25 13:12:57.108970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.009 [2024-07-25 13:12:57.108989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.009 [2024-07-25 13:12:57.109001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.009 [2024-07-25 13:12:57.109024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.009 [2024-07-25 13:12:57.109218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.009 [2024-07-25 13:12:57.109246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.009 [2024-07-25 13:12:57.109259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.009 [2024-07-25 13:12:57.109274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.009 [2024-07-25 13:12:57.109310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.009 [2024-07-25 13:12:57.109329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.009 [2024-07-25 13:12:57.109341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.009 [2024-07-25 13:12:57.109354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.215412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.215488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.268 [2024-07-25 13:12:57.215509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 13:12:57.215523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.300998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.301093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.268 [2024-07-25 13:12:57.301150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 13:12:57.301168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.301323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.301351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.268 [2024-07-25 13:12:57.301365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 13:12:57.301379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.301465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.301491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.268 [2024-07-25 13:12:57.301504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 13:12:57.301517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.301664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.301693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.268 [2024-07-25 13:12:57.301706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 
13:12:57.301719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.301785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.301808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:05.268 [2024-07-25 13:12:57.301821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 13:12:57.301834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.301889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.301907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.268 [2024-07-25 13:12:57.301921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 13:12:57.301934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.301993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.268 [2024-07-25 13:12:57.302015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.268 [2024-07-25 13:12:57.302028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.268 [2024-07-25 13:12:57.302042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.268 [2024-07-25 13:12:57.302253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.883 ms, result 0 00:17:05.268 true 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77915 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 77915 ']' 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 77915 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77915 00:17:05.268 killing process with pid 77915 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77915' 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 77915 00:17:05.268 13:12:57 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 77915 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:10.534 13:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:10.534 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:10.534 fio-3.35 00:17:10.534 Starting 1 thread 00:17:15.800 00:17:15.800 test: (groupid=0, jobs=1): err= 0: pid=78131: Thu Jul 25 13:13:07 2024 00:17:15.800 read: IOPS=999, BW=66.4MiB/s (69.6MB/s)(255MiB/3834msec) 00:17:15.800 slat (nsec): min=5773, max=36568, avg=7606.26, stdev=3158.76 00:17:15.800 clat (usec): min=319, max=1673, avg=446.17, stdev=57.24 00:17:15.801 lat (usec): min=327, max=1681, avg=453.77, stdev=57.81 00:17:15.801 clat percentiles (usec): 00:17:15.801 | 1.00th=[ 355], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 392], 00:17:15.801 | 30.00th=[ 429], 40.00th=[ 445], 50.00th=[ 449], 60.00th=[ 453], 00:17:15.801 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 523], 95.00th=[ 537], 00:17:15.801 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 799], 99.95th=[ 996], 00:17:15.801 | 99.99th=[ 1680] 00:17:15.801 write: IOPS=1006, BW=66.9MiB/s (70.1MB/s)(256MiB/3830msec); 0 zone resets 00:17:15.801 slat (nsec): min=20071, max=95720, avg=24317.47, stdev=5271.52 00:17:15.801 clat (usec): min=358, max=815, avg=505.00, stdev=59.37 00:17:15.801 lat (usec): min=386, max=844, avg=529.31, stdev=59.31 00:17:15.801 clat percentiles (usec): 00:17:15.801 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 433], 20.00th=[ 469], 00:17:15.801 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 510], 00:17:15.801 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 611], 00:17:15.801 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 783], 99.95th=[ 816], 00:17:15.801 | 99.99th=[ 816] 00:17:15.801 bw ( KiB/s): min=67184, max=69496, per=99.98%, avg=68446.86, stdev=818.69, samples=7 00:17:15.801 iops : min= 988, max= 1022, avg=1006.57, stdev=12.04, samples=7 00:17:15.801 lat (usec) : 500=70.23%, 750=29.56%, 1000=0.20% 00:17:15.801 lat 
(msec) : 2=0.01% 00:17:15.801 cpu : usr=98.98%, sys=0.31%, ctx=7, majf=0, minf=1171 00:17:15.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.801 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.801 00:17:15.801 Run status group 0 (all jobs): 00:17:15.801 READ: bw=66.4MiB/s (69.6MB/s), 66.4MiB/s-66.4MiB/s (69.6MB/s-69.6MB/s), io=255MiB (267MB), run=3834-3834msec 00:17:15.801 WRITE: bw=66.9MiB/s (70.1MB/s), 66.9MiB/s-66.9MiB/s (70.1MB/s-70.1MB/s), io=256MiB (269MB), run=3830-3830msec 00:17:16.736 ----------------------------------------------------- 00:17:16.737 Suppressions used: 00:17:16.737 count bytes template 00:17:16.737 1 5 /usr/src/fio/parse.c 00:17:16.737 1 8 libtcmalloc_minimal.so 00:17:16.737 1 904 libcrypto.so 00:17:16.737 ----------------------------------------------------- 00:17:16.737 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:16.995 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:16.996 13:13:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:16.996 13:13:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:16.996 13:13:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:16.996 13:13:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:16.996 13:13:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:16.996 13:13:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:17.254 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:17.254 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:17.254 fio-3.35 00:17:17.254 Starting 2 threads 00:17:49.417 00:17:49.417 first_half: (groupid=0, jobs=1): err= 0: pid=78234: Thu Jul 25 13:13:41 2024 00:17:49.417 read: IOPS=2136, BW=8547KiB/s (8753kB/s)(255MiB/30563msec) 00:17:49.417 slat (nsec): min=4841, max=62130, avg=8161.78, stdev=2608.94 00:17:49.417 clat (usec): min=807, max=335118, avg=46301.10, stdev=22270.26 00:17:49.417 lat (usec): min=814, max=335125, avg=46309.26, stdev=22270.40 00:17:49.417 clat percentiles (msec): 00:17:49.417 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:17:49.417 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:17:49.417 | 70.00th=[ 45], 80.00th=[ 50], 90.00th=[ 54], 95.00th=[ 63], 00:17:49.417 | 99.00th=[ 176], 99.50th=[ 199], 99.90th=[ 259], 99.95th=[ 288], 00:17:49.417 | 99.99th=[ 326] 00:17:49.417 write: IOPS=2413, BW=9655KiB/s (9886kB/s)(256MiB/27152msec); 0 zone resets 00:17:49.417 slat (usec): min=6, max=3894, avg=10.22, stdev=19.07 00:17:49.417 clat (usec): min=491, max=120831, avg=13491.45, stdev=23512.63 00:17:49.417 lat (usec): min=510, max=120843, avg=13501.67, stdev=23513.15 00:17:49.417 clat percentiles (usec): 00:17:49.417 | 1.00th=[ 1057], 5.00th=[ 1369], 10.00th=[ 1631], 20.00th=[ 2573], 00:17:49.417 | 30.00th=[ 4228], 40.00th=[ 5669], 50.00th=[ 6718], 60.00th=[ 7504], 00:17:49.417 | 70.00th=[ 8848], 80.00th=[ 12387], 90.00th=[ 16712], 95.00th=[ 86508], 00:17:49.417 | 99.00th=[107480], 99.50th=[112722], 99.90th=[117965], 99.95th=[119014], 00:17:49.417 | 99.99th=[120062] 00:17:49.417 bw ( KiB/s): min= 384, max=40328, per=100.00%, avg=19418.07, stdev=11913.23, samples=27 00:17:49.417 iops : min= 96, max=10082, avg=4854.52, stdev=2978.31, samples=27 00:17:49.417 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.32% 00:17:49.417 lat (msec) : 2=7.68%, 4=6.36%, 10=23.58%, 20=8.38%, 50=41.46% 00:17:49.417 lat (msec) : 100=9.91%, 250=2.22%, 500=0.06% 00:17:49.417 cpu : usr=99.01%, sys=0.17%, ctx=237, majf=0, minf=5601 00:17:49.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:49.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:49.417 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:49.417 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:49.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:49.417 second_half: (groupid=0, jobs=1): err= 0: pid=78235: Thu Jul 25 13:13:41 2024 00:17:49.417 read: IOPS=2121, BW=8488KiB/s (8691kB/s)(255MiB/30719msec) 00:17:49.417 slat (nsec): min=4762, max=73611, avg=7966.57, stdev=2362.71 00:17:49.417 clat (usec): min=854, max=340461, avg=46125.43, stdev=24339.33 00:17:49.417 lat (usec): min=866, max=340469, avg=46133.40, stdev=24339.67 00:17:49.417 clat percentiles (msec): 00:17:49.417 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:17:49.417 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:17:49.417 | 70.00th=[ 44], 80.00th=[ 48], 90.00th=[ 53], 95.00th=[ 64], 
00:17:49.417 | 99.00th=[ 186], 99.50th=[ 207], 99.90th=[ 226], 99.95th=[ 241], 00:17:49.417 | 99.99th=[ 334] 00:17:49.417 write: IOPS=2713, BW=10.6MiB/s (11.1MB/s)(256MiB/24153msec); 0 zone resets 00:17:49.417 slat (usec): min=5, max=612, avg=10.07, stdev= 6.01 00:17:49.417 clat (usec): min=498, max=120754, avg=14093.22, stdev=24756.32 00:17:49.417 lat (usec): min=515, max=120765, avg=14103.29, stdev=24756.45 00:17:49.417 clat percentiles (usec): 00:17:49.417 | 1.00th=[ 1020], 5.00th=[ 1287], 10.00th=[ 1434], 20.00th=[ 1680], 00:17:49.417 | 30.00th=[ 1958], 40.00th=[ 2802], 50.00th=[ 4555], 60.00th=[ 6390], 00:17:49.417 | 70.00th=[ 9896], 80.00th=[ 14091], 90.00th=[ 46400], 95.00th=[ 86508], 00:17:49.417 | 99.00th=[106431], 99.50th=[111674], 99.90th=[117965], 99.95th=[119014], 00:17:49.417 | 99.99th=[120062] 00:17:49.417 bw ( KiB/s): min= 1072, max=47688, per=100.00%, avg=21845.33, stdev=11707.22, samples=24 00:17:49.417 iops : min= 268, max=11922, avg=5461.33, stdev=2926.80, samples=24 00:17:49.417 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.40% 00:17:49.417 lat (msec) : 2=15.29%, 4=7.57%, 10=12.34%, 20=9.67%, 50=42.42% 00:17:49.417 lat (msec) : 100=9.54%, 250=2.72%, 500=0.01% 00:17:49.417 cpu : usr=99.05%, sys=0.14%, ctx=161, majf=0, minf=5520 00:17:49.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:49.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:49.417 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:49.418 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:49.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:49.418 00:17:49.418 Run status group 0 (all jobs): 00:17:49.418 READ: bw=16.6MiB/s (17.4MB/s), 8488KiB/s-8547KiB/s (8691kB/s-8753kB/s), io=510MiB (534MB), run=30563-30719msec 00:17:49.418 WRITE: bw=18.9MiB/s (19.8MB/s), 9655KiB/s-10.6MiB/s (9886kB/s-11.1MB/s), io=512MiB (537MB), run=24153-27152msec 00:17:51.947 ----------------------------------------------------- 00:17:51.947 Suppressions used: 00:17:51.947 count bytes template 00:17:51.947 2 10 /usr/src/fio/parse.c 00:17:51.947 4 384 /usr/src/fio/iolog.c 00:17:51.947 1 8 libtcmalloc_minimal.so 00:17:51.947 1 904 libcrypto.so 00:17:51.947 ----------------------------------------------------- 00:17:51.947 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:51.947 13:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:51.947 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:51.947 fio-3.35 00:17:51.947 Starting 1 thread 00:18:10.023 00:18:10.023 test: (groupid=0, jobs=1): err= 0: pid=78610: Thu Jul 25 13:14:01 2024 00:18:10.023 read: IOPS=6155, BW=24.0MiB/s (25.2MB/s)(255MiB/10593msec) 00:18:10.023 slat (nsec): min=4680, max=44690, avg=7329.74, stdev=2087.07 00:18:10.023 clat (usec): min=746, max=38651, avg=20782.68, stdev=1903.35 00:18:10.023 lat (usec): min=752, max=38659, avg=20790.01, stdev=1903.35 00:18:10.023 clat percentiles (usec): 00:18:10.023 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19792], 00:18:10.023 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:18:10.023 | 70.00th=[20579], 80.00th=[21365], 90.00th=[23200], 95.00th=[25560], 00:18:10.023 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28967], 99.95th=[33817], 00:18:10.023 | 99.99th=[38011] 00:18:10.023 write: IOPS=11.2k, BW=43.6MiB/s (45.8MB/s)(256MiB/5866msec); 0 zone resets 00:18:10.023 slat (usec): min=6, max=486, avg=10.29, stdev= 5.41 00:18:10.023 clat (usec): min=651, max=59358, avg=11394.06, stdev=14293.49 00:18:10.023 lat (usec): min=659, max=59367, avg=11404.35, stdev=14293.50 00:18:10.023 clat percentiles (usec): 00:18:10.023 | 1.00th=[ 947], 5.00th=[ 1156], 10.00th=[ 1270], 20.00th=[ 1467], 00:18:10.023 | 30.00th=[ 1713], 40.00th=[ 2245], 50.00th=[ 7373], 60.00th=[ 8717], 00:18:10.023 | 70.00th=[10159], 80.00th=[12387], 90.00th=[39584], 95.00th=[44827], 00:18:10.023 | 99.00th=[52691], 99.50th=[54264], 99.90th=[56886], 99.95th=[57410], 00:18:10.023 | 99.99th=[58459] 00:18:10.023 bw ( KiB/s): min=30432, max=61928, per=97.75%, avg=43683.33, stdev=9645.51, samples=12 00:18:10.023 iops : min= 7608, max=15482, avg=10920.83, stdev=2411.38, samples=12 00:18:10.023 lat (usec) : 750=0.02%, 1000=0.81% 00:18:10.023 lat (msec) : 2=17.84%, 4=2.24%, 10=13.75%, 20=27.99%, 50=36.19% 00:18:10.023 lat (msec) : 100=1.16% 00:18:10.023 cpu : usr=98.82%, sys=0.30%, 
ctx=33, majf=0, minf=5567 00:18:10.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:10.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.023 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.023 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.023 00:18:10.023 Run status group 0 (all jobs): 00:18:10.023 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=255MiB (267MB), run=10593-10593msec 00:18:10.023 WRITE: bw=43.6MiB/s (45.8MB/s), 43.6MiB/s-43.6MiB/s (45.8MB/s-45.8MB/s), io=256MiB (268MB), run=5866-5866msec 00:18:11.396 ----------------------------------------------------- 00:18:11.396 Suppressions used: 00:18:11.396 count bytes template 00:18:11.396 1 5 /usr/src/fio/parse.c 00:18:11.396 2 192 /usr/src/fio/iolog.c 00:18:11.396 1 8 libtcmalloc_minimal.so 00:18:11.396 1 904 libcrypto.so 00:18:11.396 ----------------------------------------------------- 00:18:11.396 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.396 Remove shared memory files 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62048 /dev/shm/spdk_tgt_trace.pid76843 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:11.396 ************************************ 00:18:11.396 END TEST ftl_fio_basic 00:18:11.396 ************************************ 00:18:11.396 00:18:11.396 real 1m16.778s 00:18:11.396 user 2m53.147s 00:18:11.396 sys 0m3.844s 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.396 13:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:11.396 13:14:03 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:11.396 13:14:03 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:11.396 13:14:03 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.396 13:14:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:11.396 ************************************ 00:18:11.396 START TEST ftl_bdevperf 00:18:11.396 ************************************ 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:11.396 * Looking for test storage... 
00:18:11.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.396 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:11.397 13:14:03 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78865 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:11.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 78865 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 78865 ']' 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.397 13:14:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:11.656 [2024-07-25 13:14:03.617062] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:11.656 [2024-07-25 13:14:03.617242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78865 ] 00:18:11.656 [2024-07-25 13:14:03.779584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.921 [2024-07-25 13:14:03.966040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:12.486 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:12.806 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:12.806 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:12.806 13:14:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:12.806 13:14:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:12.806 13:14:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:12.806 13:14:04 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:12.806 13:14:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:12.806 13:14:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:13.062 13:14:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:13.062 { 00:18:13.062 "name": "nvme0n1", 00:18:13.062 "aliases": [ 00:18:13.062 "3cc3227b-92d7-4bdc-9d86-b25cf197f587" 00:18:13.062 ], 00:18:13.062 "product_name": "NVMe disk", 00:18:13.062 "block_size": 4096, 00:18:13.062 "num_blocks": 1310720, 00:18:13.062 "uuid": "3cc3227b-92d7-4bdc-9d86-b25cf197f587", 00:18:13.062 "assigned_rate_limits": { 00:18:13.062 "rw_ios_per_sec": 0, 00:18:13.062 "rw_mbytes_per_sec": 0, 00:18:13.062 "r_mbytes_per_sec": 0, 00:18:13.062 "w_mbytes_per_sec": 0 00:18:13.062 }, 00:18:13.062 "claimed": true, 00:18:13.062 "claim_type": "read_many_write_one", 00:18:13.062 "zoned": false, 00:18:13.062 "supported_io_types": { 00:18:13.062 "read": true, 00:18:13.062 "write": true, 00:18:13.062 "unmap": true, 00:18:13.062 "flush": true, 00:18:13.062 "reset": true, 00:18:13.062 "nvme_admin": true, 00:18:13.062 "nvme_io": true, 00:18:13.062 "nvme_io_md": false, 00:18:13.062 "write_zeroes": true, 00:18:13.062 "zcopy": false, 00:18:13.062 "get_zone_info": false, 00:18:13.062 "zone_management": false, 00:18:13.062 "zone_append": false, 00:18:13.062 "compare": true, 00:18:13.062 "compare_and_write": false, 00:18:13.062 "abort": true, 00:18:13.062 "seek_hole": false, 00:18:13.062 "seek_data": false, 00:18:13.062 "copy": true, 00:18:13.062 "nvme_iov_md": false 00:18:13.062 }, 00:18:13.062 "driver_specific": { 00:18:13.062 "nvme": [ 00:18:13.062 { 00:18:13.062 "pci_address": "0000:00:11.0", 00:18:13.062 "trid": { 00:18:13.062 "trtype": "PCIe", 00:18:13.062 "traddr": "0000:00:11.0" 00:18:13.062 }, 00:18:13.062 "ctrlr_data": { 00:18:13.062 "cntlid": 0, 00:18:13.062 "vendor_id": "0x1b36", 00:18:13.062 "model_number": "QEMU NVMe Ctrl", 00:18:13.062 "serial_number": "12341", 00:18:13.062 "firmware_revision": "8.0.0", 00:18:13.062 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:13.062 "oacs": { 00:18:13.062 "security": 0, 00:18:13.062 "format": 1, 00:18:13.062 "firmware": 0, 00:18:13.062 "ns_manage": 1 00:18:13.062 }, 00:18:13.062 "multi_ctrlr": false, 00:18:13.062 "ana_reporting": false 00:18:13.062 }, 00:18:13.062 "vs": { 00:18:13.062 "nvme_version": "1.4" 00:18:13.062 }, 00:18:13.062 "ns_data": { 00:18:13.062 "id": 1, 00:18:13.062 "can_share": false 00:18:13.062 } 00:18:13.062 } 00:18:13.062 ], 00:18:13.062 "mp_policy": "active_passive" 00:18:13.062 } 00:18:13.062 } 00:18:13.062 ]' 00:18:13.062 13:14:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:13.320 13:14:05 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:13.320 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:13.579 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ead6d8ae-be6b-40b2-af99-39e83d3b2c78 00:18:13.579 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:13.579 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ead6d8ae-be6b-40b2-af99-39e83d3b2c78 00:18:13.837 13:14:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:14.096 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3cefecd9-3527-4f8c-9fa4-d256b85c145e 00:18:14.096 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3cefecd9-3527-4f8c-9fa4-d256b85c145e 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:14.360 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:14.626 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:14.626 { 00:18:14.626 "name": "a3ec7f21-411b-4fa6-a903-855873a8b72a", 00:18:14.626 "aliases": [ 00:18:14.626 "lvs/nvme0n1p0" 00:18:14.626 ], 00:18:14.626 "product_name": "Logical Volume", 00:18:14.626 "block_size": 4096, 00:18:14.626 "num_blocks": 26476544, 00:18:14.626 "uuid": "a3ec7f21-411b-4fa6-a903-855873a8b72a", 00:18:14.626 "assigned_rate_limits": { 00:18:14.626 "rw_ios_per_sec": 0, 00:18:14.626 "rw_mbytes_per_sec": 0, 00:18:14.626 "r_mbytes_per_sec": 0, 00:18:14.626 "w_mbytes_per_sec": 0 00:18:14.626 }, 00:18:14.626 "claimed": false, 00:18:14.626 "zoned": false, 00:18:14.626 "supported_io_types": { 00:18:14.626 "read": true, 00:18:14.626 "write": true, 00:18:14.626 "unmap": true, 00:18:14.626 "flush": false, 00:18:14.627 "reset": true, 00:18:14.627 "nvme_admin": false, 00:18:14.627 "nvme_io": false, 00:18:14.627 "nvme_io_md": false, 00:18:14.627 "write_zeroes": true, 00:18:14.627 "zcopy": false, 00:18:14.627 "get_zone_info": false, 00:18:14.627 "zone_management": false, 00:18:14.627 "zone_append": false, 00:18:14.627 "compare": false, 00:18:14.627 "compare_and_write": false, 00:18:14.627 "abort": false, 00:18:14.627 "seek_hole": true, 
00:18:14.627 "seek_data": true, 00:18:14.627 "copy": false, 00:18:14.627 "nvme_iov_md": false 00:18:14.627 }, 00:18:14.627 "driver_specific": { 00:18:14.627 "lvol": { 00:18:14.627 "lvol_store_uuid": "3cefecd9-3527-4f8c-9fa4-d256b85c145e", 00:18:14.627 "base_bdev": "nvme0n1", 00:18:14.627 "thin_provision": true, 00:18:14.627 "num_allocated_clusters": 0, 00:18:14.627 "snapshot": false, 00:18:14.627 "clone": false, 00:18:14.627 "esnap_clone": false 00:18:14.627 } 00:18:14.627 } 00:18:14.627 } 00:18:14.627 ]' 00:18:14.627 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:14.627 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:14.627 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:14.886 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:14.886 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:14.886 13:14:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:14.886 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:14.886 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:14.886 13:14:06 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:15.145 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:15.403 { 00:18:15.403 "name": "a3ec7f21-411b-4fa6-a903-855873a8b72a", 00:18:15.403 "aliases": [ 00:18:15.403 "lvs/nvme0n1p0" 00:18:15.403 ], 00:18:15.403 "product_name": "Logical Volume", 00:18:15.403 "block_size": 4096, 00:18:15.403 "num_blocks": 26476544, 00:18:15.403 "uuid": "a3ec7f21-411b-4fa6-a903-855873a8b72a", 00:18:15.403 "assigned_rate_limits": { 00:18:15.403 "rw_ios_per_sec": 0, 00:18:15.403 "rw_mbytes_per_sec": 0, 00:18:15.403 "r_mbytes_per_sec": 0, 00:18:15.403 "w_mbytes_per_sec": 0 00:18:15.403 }, 00:18:15.403 "claimed": false, 00:18:15.403 "zoned": false, 00:18:15.403 "supported_io_types": { 00:18:15.403 "read": true, 00:18:15.403 "write": true, 00:18:15.403 "unmap": true, 00:18:15.403 "flush": false, 00:18:15.403 "reset": true, 00:18:15.403 "nvme_admin": false, 00:18:15.403 "nvme_io": false, 00:18:15.403 "nvme_io_md": false, 00:18:15.403 "write_zeroes": true, 00:18:15.403 "zcopy": false, 00:18:15.403 "get_zone_info": false, 00:18:15.403 "zone_management": false, 00:18:15.403 "zone_append": false, 00:18:15.403 "compare": false, 00:18:15.403 "compare_and_write": false, 00:18:15.403 "abort": false, 00:18:15.403 "seek_hole": true, 00:18:15.403 "seek_data": true, 00:18:15.403 
"copy": false, 00:18:15.403 "nvme_iov_md": false 00:18:15.403 }, 00:18:15.403 "driver_specific": { 00:18:15.403 "lvol": { 00:18:15.403 "lvol_store_uuid": "3cefecd9-3527-4f8c-9fa4-d256b85c145e", 00:18:15.403 "base_bdev": "nvme0n1", 00:18:15.403 "thin_provision": true, 00:18:15.403 "num_allocated_clusters": 0, 00:18:15.403 "snapshot": false, 00:18:15.403 "clone": false, 00:18:15.403 "esnap_clone": false 00:18:15.403 } 00:18:15.403 } 00:18:15.403 } 00:18:15.403 ]' 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:15.403 13:14:07 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:15.661 13:14:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:15.661 13:14:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:15.661 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:15.661 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:15.661 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:15.661 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:15.661 13:14:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a3ec7f21-411b-4fa6-a903-855873a8b72a 00:18:15.919 13:14:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:15.919 { 00:18:15.919 "name": "a3ec7f21-411b-4fa6-a903-855873a8b72a", 00:18:15.919 "aliases": [ 00:18:15.919 "lvs/nvme0n1p0" 00:18:15.919 ], 00:18:15.919 "product_name": "Logical Volume", 00:18:15.919 "block_size": 4096, 00:18:15.919 "num_blocks": 26476544, 00:18:15.919 "uuid": "a3ec7f21-411b-4fa6-a903-855873a8b72a", 00:18:15.919 "assigned_rate_limits": { 00:18:15.919 "rw_ios_per_sec": 0, 00:18:15.919 "rw_mbytes_per_sec": 0, 00:18:15.919 "r_mbytes_per_sec": 0, 00:18:15.919 "w_mbytes_per_sec": 0 00:18:15.919 }, 00:18:15.920 "claimed": false, 00:18:15.920 "zoned": false, 00:18:15.920 "supported_io_types": { 00:18:15.920 "read": true, 00:18:15.920 "write": true, 00:18:15.920 "unmap": true, 00:18:15.920 "flush": false, 00:18:15.920 "reset": true, 00:18:15.920 "nvme_admin": false, 00:18:15.920 "nvme_io": false, 00:18:15.920 "nvme_io_md": false, 00:18:15.920 "write_zeroes": true, 00:18:15.920 "zcopy": false, 00:18:15.920 "get_zone_info": false, 00:18:15.920 "zone_management": false, 00:18:15.920 "zone_append": false, 00:18:15.920 "compare": false, 00:18:15.920 "compare_and_write": false, 00:18:15.920 "abort": false, 00:18:15.920 "seek_hole": true, 00:18:15.920 "seek_data": true, 00:18:15.920 "copy": false, 00:18:15.920 "nvme_iov_md": false 00:18:15.920 }, 00:18:15.920 "driver_specific": { 00:18:15.920 "lvol": { 00:18:15.920 "lvol_store_uuid": "3cefecd9-3527-4f8c-9fa4-d256b85c145e", 00:18:15.920 "base_bdev": 
"nvme0n1", 00:18:15.920 "thin_provision": true, 00:18:15.920 "num_allocated_clusters": 0, 00:18:15.920 "snapshot": false, 00:18:15.920 "clone": false, 00:18:15.920 "esnap_clone": false 00:18:15.920 } 00:18:15.920 } 00:18:15.920 } 00:18:15.920 ]' 00:18:15.920 13:14:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:15.920 13:14:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:15.920 13:14:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:16.178 13:14:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:16.178 13:14:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:16.178 13:14:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:16.178 13:14:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:16.178 13:14:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a3ec7f21-411b-4fa6-a903-855873a8b72a -c nvc0n1p0 --l2p_dram_limit 20 00:18:16.178 [2024-07-25 13:14:08.361672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.178 [2024-07-25 13:14:08.361743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:16.178 [2024-07-25 13:14:08.361770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:16.178 [2024-07-25 13:14:08.361799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.178 [2024-07-25 13:14:08.361896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.178 [2024-07-25 13:14:08.361915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:16.178 [2024-07-25 13:14:08.361932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:16.178 [2024-07-25 13:14:08.361943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.178 [2024-07-25 13:14:08.361973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:16.178 [2024-07-25 13:14:08.362956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:16.178 [2024-07-25 13:14:08.362998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.178 [2024-07-25 13:14:08.363012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:16.179 [2024-07-25 13:14:08.363027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:18:16.179 [2024-07-25 13:14:08.363040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.179 [2024-07-25 13:14:08.363191] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 01701f03-af83-492e-ab4e-b0323b1a92e1 00:18:16.179 [2024-07-25 13:14:08.364209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.179 [2024-07-25 13:14:08.364256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:16.179 [2024-07-25 13:14:08.364276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:16.179 [2024-07-25 13:14:08.364291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.437 [2024-07-25 13:14:08.369230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.437 [2024-07-25 13:14:08.369290] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:16.437 [2024-07-25 13:14:08.369309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms 00:18:16.437 [2024-07-25 13:14:08.369323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.437 [2024-07-25 13:14:08.369459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.437 [2024-07-25 13:14:08.369484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:16.437 [2024-07-25 13:14:08.369499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:16.437 [2024-07-25 13:14:08.369516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.437 [2024-07-25 13:14:08.369607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.437 [2024-07-25 13:14:08.369630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:16.437 [2024-07-25 13:14:08.369644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:16.437 [2024-07-25 13:14:08.369657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.437 [2024-07-25 13:14:08.369689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:16.437 [2024-07-25 13:14:08.374353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.437 [2024-07-25 13:14:08.374397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:16.437 [2024-07-25 13:14:08.374418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:18:16.437 [2024-07-25 13:14:08.374431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.437 [2024-07-25 13:14:08.374481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.437 [2024-07-25 13:14:08.374498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:16.437 [2024-07-25 13:14:08.374512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:16.437 [2024-07-25 13:14:08.374524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.437 [2024-07-25 13:14:08.374587] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:16.437 [2024-07-25 13:14:08.374752] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:16.437 [2024-07-25 13:14:08.374775] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:16.437 [2024-07-25 13:14:08.374792] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:16.437 [2024-07-25 13:14:08.374810] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:16.437 [2024-07-25 13:14:08.374824] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:16.437 [2024-07-25 13:14:08.374842] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:16.437 [2024-07-25 13:14:08.374854] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:16.437 [2024-07-25 13:14:08.374868] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:16.437 [2024-07-25 13:14:08.374880] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:18:16.437 [2024-07-25 13:14:08.374894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.438 [2024-07-25 13:14:08.374906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:16.438 [2024-07-25 13:14:08.374924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:18:16.438 [2024-07-25 13:14:08.374935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.438 [2024-07-25 13:14:08.375030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.438 [2024-07-25 13:14:08.375046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:16.438 [2024-07-25 13:14:08.375061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:16.438 [2024-07-25 13:14:08.375072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.438 [2024-07-25 13:14:08.375210] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:16.438 [2024-07-25 13:14:08.375232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:16.438 [2024-07-25 13:14:08.375247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:16.438 [2024-07-25 13:14:08.375289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:16.438 [2024-07-25 13:14:08.375327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.438 [2024-07-25 13:14:08.375350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:16.438 [2024-07-25 13:14:08.375361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:16.438 [2024-07-25 13:14:08.375376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.438 [2024-07-25 13:14:08.375387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:16.438 [2024-07-25 13:14:08.375400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:16.438 [2024-07-25 13:14:08.375411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:16.438 [2024-07-25 13:14:08.375437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:16.438 [2024-07-25 13:14:08.375488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:16.438 [2024-07-25 13:14:08.375523] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:16.438 [2024-07-25 13:14:08.375559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:16.438 [2024-07-25 13:14:08.375593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:16.438 [2024-07-25 13:14:08.375635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.438 [2024-07-25 13:14:08.375659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:16.438 [2024-07-25 13:14:08.375671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:16.438 [2024-07-25 13:14:08.375685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.438 [2024-07-25 13:14:08.375697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:16.438 [2024-07-25 13:14:08.375710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:16.438 [2024-07-25 13:14:08.375721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:16.438 [2024-07-25 13:14:08.375744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:16.438 [2024-07-25 13:14:08.375757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375768] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:16.438 [2024-07-25 13:14:08.375782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:16.438 [2024-07-25 13:14:08.375794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.438 [2024-07-25 13:14:08.375823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:16.438 [2024-07-25 13:14:08.375838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:16.438 [2024-07-25 13:14:08.375849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:16.438 [2024-07-25 13:14:08.375863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:16.438 [2024-07-25 13:14:08.375874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:16.438 [2024-07-25 13:14:08.375887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:16.438 [2024-07-25 13:14:08.375904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:16.438 [2024-07-25 13:14:08.375921] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.438 [2024-07-25 13:14:08.375936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:16.438 [2024-07-25 13:14:08.375950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:16.438 [2024-07-25 13:14:08.375962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:16.438 [2024-07-25 13:14:08.375975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:16.438 [2024-07-25 13:14:08.375987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:16.438 [2024-07-25 13:14:08.376000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:16.438 [2024-07-25 13:14:08.376012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:16.438 [2024-07-25 13:14:08.376027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:16.438 [2024-07-25 13:14:08.376041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:16.438 [2024-07-25 13:14:08.376057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:16.438 [2024-07-25 13:14:08.376069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:16.438 [2024-07-25 13:14:08.376083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:16.438 [2024-07-25 13:14:08.376094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:16.438 [2024-07-25 13:14:08.376123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:16.438 [2024-07-25 13:14:08.376137] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:16.438 [2024-07-25 13:14:08.376152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.438 [2024-07-25 13:14:08.376165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:16.438 [2024-07-25 13:14:08.376179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:16.438 [2024-07-25 13:14:08.376191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:16.438 [2024-07-25 13:14:08.376205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:16.438 [2024-07-25 13:14:08.376219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.438 [2024-07-25 13:14:08.376245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:16.438 [2024-07-25 13:14:08.376260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:18:16.438 [2024-07-25 13:14:08.376274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.438 [2024-07-25 13:14:08.376322] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:16.438 [2024-07-25 13:14:08.376352] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:18.337 [2024-07-25 13:14:10.439318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.337 [2024-07-25 13:14:10.439649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:18.337 [2024-07-25 13:14:10.439791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2063.008 ms 00:18:18.337 [2024-07-25 13:14:10.439925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.337 [2024-07-25 13:14:10.487297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.337 [2024-07-25 13:14:10.487645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.337 [2024-07-25 13:14:10.487830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.025 ms 00:18:18.337 [2024-07-25 13:14:10.487907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.337 [2024-07-25 13:14:10.488279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.337 [2024-07-25 13:14:10.488479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:18.337 [2024-07-25 13:14:10.488650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:18.337 [2024-07-25 13:14:10.488735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.528579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.528808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.596 [2024-07-25 13:14:10.528945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.641 ms 00:18:18.596 [2024-07-25 13:14:10.529016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.529129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.529253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.596 [2024-07-25 13:14:10.529316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:18.596 [2024-07-25 13:14:10.529359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.529847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.529998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.596 [2024-07-25 13:14:10.530126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:18:18.596 [2024-07-25 13:14:10.530256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.530448] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.530508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.596 [2024-07-25 13:14:10.530611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:18.596 [2024-07-25 13:14:10.530734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.547022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.547228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.596 [2024-07-25 13:14:10.547358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.214 ms 00:18:18.596 [2024-07-25 13:14:10.547494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.561212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:18.596 [2024-07-25 13:14:10.566344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.566382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:18.596 [2024-07-25 13:14:10.566404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.670 ms 00:18:18.596 [2024-07-25 13:14:10.566418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.626983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.627058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:18.596 [2024-07-25 13:14:10.627082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.506 ms 00:18:18.596 [2024-07-25 13:14:10.627095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.627340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.627362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:18.596 [2024-07-25 13:14:10.627381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:18:18.596 [2024-07-25 13:14:10.627397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.658823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.658873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:18.596 [2024-07-25 13:14:10.658896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.331 ms 00:18:18.596 [2024-07-25 13:14:10.658909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.689697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.689764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:18.596 [2024-07-25 13:14:10.689790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.729 ms 00:18:18.596 [2024-07-25 13:14:10.689803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.690559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.690586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.596 [2024-07-25 13:14:10.690603] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:18:18.596 [2024-07-25 13:14:10.690615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.596 [2024-07-25 13:14:10.777356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.596 [2024-07-25 13:14:10.777430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:18.596 [2024-07-25 13:14:10.777461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.659 ms 00:18:18.596 [2024-07-25 13:14:10.777475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.853 [2024-07-25 13:14:10.809652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.853 [2024-07-25 13:14:10.809716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:18.853 [2024-07-25 13:14:10.809746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.115 ms 00:18:18.853 [2024-07-25 13:14:10.809759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.853 [2024-07-25 13:14:10.841146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.853 [2024-07-25 13:14:10.841208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:18.853 [2024-07-25 13:14:10.841231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.321 ms 00:18:18.853 [2024-07-25 13:14:10.841244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.853 [2024-07-25 13:14:10.872768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.853 [2024-07-25 13:14:10.872823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:18.853 [2024-07-25 13:14:10.872847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.463 ms 00:18:18.853 [2024-07-25 13:14:10.872860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.853 [2024-07-25 13:14:10.872919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.853 [2024-07-25 13:14:10.872939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:18.853 [2024-07-25 13:14:10.872959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:18.853 [2024-07-25 13:14:10.872971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.853 [2024-07-25 13:14:10.873126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.853 [2024-07-25 13:14:10.873151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:18.853 [2024-07-25 13:14:10.873172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:18.853 [2024-07-25 13:14:10.873184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.853 [2024-07-25 13:14:10.874315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2512.124 ms, result 0 00:18:18.853 { 00:18:18.853 "name": "ftl0", 00:18:18.853 "uuid": "01701f03-af83-492e-ab4e-b0323b1a92e1" 00:18:18.853 } 00:18:18.853 13:14:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:18.853 13:14:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:18.853 13:14:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:19.111 13:14:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:19.369 [2024-07-25 13:14:11.310818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:19.369 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:19.369 Zero copy mechanism will not be used. 00:18:19.369 Running I/O for 4 seconds... 00:18:23.551 00:18:23.551 Latency(us) 00:18:23.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.551 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:23.551 ftl0 : 4.00 1940.89 128.89 0.00 0.00 538.56 220.63 960.70 00:18:23.551 =================================================================================================================== 00:18:23.551 Total : 1940.89 128.89 0.00 0.00 538.56 220.63 960.70 00:18:23.551 [2024-07-25 13:14:15.321263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ft0 00:18:23.551 l0 00:18:23.551 13:14:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:23.551 [2024-07-25 13:14:15.460404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:23.551 Running I/O for 4 seconds... 00:18:27.734 00:18:27.734 Latency(us) 00:18:27.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.734 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.734 ftl0 : 4.02 6931.01 27.07 0.00 0.00 18411.91 409.60 36700.16 00:18:27.734 =================================================================================================================== 00:18:27.734 Total : 6931.01 27.07 0.00 0.00 18411.91 0.00 36700.16 00:18:27.734 [2024-07-25 13:14:19.493519] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ft0 00:18:27.734 l0 00:18:27.734 13:14:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:27.734 [2024-07-25 13:14:19.631359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:27.734 Running I/O for 4 seconds... 
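
[editor's note] For readability, the three bdevperf passes traced above condense to the command sequence below. This is an illustrative recap of what ftl/bdevperf.sh drives, not the script itself; it assumes a bdevperf process serving ftl0 is already running (started earlier in the test, outside this excerpt), and SPDK_DIR is a stand-in for /home/vagrant/spdk_repo/spdk.

    # illustrative recap of the traced steps; SPDK_DIR is a stand-in path
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # ftl0 was created earlier in this trace with roughly:
    #   $SPDK_DIR/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    #     -d a3ec7f21-411b-4fa6-a903-855873a8b72a -c nvc0n1p0 --l2p_dram_limit 20
    # 1) random writes, queue depth 1, 68 KiB I/O (69632 bytes exceeds the 65536-byte
    #    zero-copy threshold, hence the "Zero copy mechanism will not be used" notice)
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1   -w randwrite -t 4 -o 69632
    # 2) random writes, queue depth 128, 4 KiB I/O
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
    # 3) read-back verification pass, queue depth 128, 4 KiB I/O (results follow below)
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify    -t 4 -o 4096
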
00:18:31.911 00:18:31.911 Latency(us) 00:18:31.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.911 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:31.911 Verification LBA range: start 0x0 length 0x1400000 00:18:31.911 ftl0 : 4.02 5518.52 21.56 0.00 0.00 23103.09 377.95 32410.53 00:18:31.911 =================================================================================================================== 00:18:31.911 Total : 5518.52 21.56 0.00 0.00 23103.09 0.00 32410.53 00:18:31.911 [2024-07-25 13:14:23.668010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:31.911 0 00:18:31.911 13:14:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:31.911 [2024-07-25 13:14:23.976209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.911 [2024-07-25 13:14:23.976270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:31.911 [2024-07-25 13:14:23.976300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:31.911 [2024-07-25 13:14:23.976314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.911 [2024-07-25 13:14:23.976351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:31.911 [2024-07-25 13:14:23.979692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.911 [2024-07-25 13:14:23.979734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:31.911 [2024-07-25 13:14:23.979751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:18:31.911 [2024-07-25 13:14:23.979765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.911 [2024-07-25 13:14:23.981307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.911 [2024-07-25 13:14:23.981359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:31.911 [2024-07-25 13:14:23.981379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:18:31.911 [2024-07-25 13:14:23.981393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.178534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.178639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.168 [2024-07-25 13:14:24.178664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 197.111 ms 00:18:32.168 [2024-07-25 13:14:24.178683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.185649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.185692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:32.168 [2024-07-25 13:14:24.185709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:18:32.168 [2024-07-25 13:14:24.185734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.217708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.217768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.168 [2024-07-25 13:14:24.217789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.881 ms 00:18:32.168 [2024-07-25 13:14:24.217804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.236383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.236456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.168 [2024-07-25 13:14:24.236482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.524 ms 00:18:32.168 [2024-07-25 13:14:24.236498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.236711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.236740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.168 [2024-07-25 13:14:24.236754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:32.168 [2024-07-25 13:14:24.236773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.268770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.268845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:32.168 [2024-07-25 13:14:24.268867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.972 ms 00:18:32.168 [2024-07-25 13:14:24.268881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.300064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.300152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:32.168 [2024-07-25 13:14:24.300174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.116 ms 00:18:32.168 [2024-07-25 13:14:24.300189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.168 [2024-07-25 13:14:24.331245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.168 [2024-07-25 13:14:24.331333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:32.168 [2024-07-25 13:14:24.331355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.996 ms 00:18:32.168 [2024-07-25 13:14:24.331370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.426 [2024-07-25 13:14:24.362688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.426 [2024-07-25 13:14:24.362748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:32.427 [2024-07-25 13:14:24.362768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.157 ms 00:18:32.427 [2024-07-25 13:14:24.362786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.427 [2024-07-25 13:14:24.362839] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:32.427 [2024-07-25 13:14:24.362870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:32.427 [2024-07-25 13:14:24.362925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.362988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363984] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.363996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.364026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.364038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.364052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.364065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.364079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:32.427 [2024-07-25 13:14:24.364092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:32.428 [2024-07-25 13:14:24.364353] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:32.428 [2024-07-25 13:14:24.364365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01701f03-af83-492e-ab4e-b0323b1a92e1 00:18:32.428 [2024-07-25 13:14:24.364380] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:32.428 [2024-07-25 13:14:24.364391] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:32.428 [2024-07-25 13:14:24.364407] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:32.428 [2024-07-25 13:14:24.364420] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:32.428 [2024-07-25 13:14:24.364432] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:32.428 [2024-07-25 13:14:24.364444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:32.428 [2024-07-25 13:14:24.364457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:32.428 [2024-07-25 13:14:24.364468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:32.428 [2024-07-25 13:14:24.364483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:32.428 [2024-07-25 13:14:24.364495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.428 [2024-07-25 13:14:24.364509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:32.428 [2024-07-25 13:14:24.364522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:18:32.428 [2024-07-25 13:14:24.364535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.381320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.428 [2024-07-25 13:14:24.381370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:32.428 [2024-07-25 13:14:24.381404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.692 ms 00:18:32.428 [2024-07-25 13:14:24.381418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.381882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.428 [2024-07-25 13:14:24.381918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:32.428 [2024-07-25 13:14:24.381934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:32.428 [2024-07-25 13:14:24.381954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.422328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.422399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.428 [2024-07-25 13:14:24.422419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.422437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.422519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.422537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.428 [2024-07-25 13:14:24.422550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.422564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.422694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.422739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.428 [2024-07-25 13:14:24.422753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.422766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.422792] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.422808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.428 [2024-07-25 13:14:24.422820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.422833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.522040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.522147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.428 [2024-07-25 13:14:24.522180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.522207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.606323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.606400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.428 [2024-07-25 13:14:24.606423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.606437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.606582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.606605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.428 [2024-07-25 13:14:24.606622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.606636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.606698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.606720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.428 [2024-07-25 13:14:24.606734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.606747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.606868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.606894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.428 [2024-07-25 13:14:24.606911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.606927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.606979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.607002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:32.428 [2024-07-25 13:14:24.607015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.607029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.607075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.607094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:32.428 [2024-07-25 13:14:24.607133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.607155] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.607214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.428 [2024-07-25 13:14:24.607237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.428 [2024-07-25 13:14:24.607250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.428 [2024-07-25 13:14:24.607264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.428 [2024-07-25 13:14:24.607423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 631.174 ms, result 0 00:18:32.428 true 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 78865 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 78865 ']' 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 78865 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78865 00:18:32.686 killing process with pid 78865 00:18:32.686 Received shutdown signal, test time was about 4.000000 seconds 00:18:32.686 00:18:32.686 Latency(us) 00:18:32.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.686 =================================================================================================================== 00:18:32.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78865' 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 78865 00:18:32.686 13:14:24 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 78865 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.000 Remove shared memory files 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:36.000 ************************************ 00:18:36.000 END TEST ftl_bdevperf 00:18:36.000 ************************************ 00:18:36.000 00:18:36.000 real 0m24.543s 00:18:36.000 user 0m28.335s 00:18:36.000 sys 0m1.095s 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.000 13:14:27 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.000 13:14:27 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:36.000 13:14:27 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:36.000 13:14:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.000 13:14:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:36.000 ************************************ 00:18:36.000 START TEST ftl_trim 00:18:36.000 ************************************ 00:18:36.000 13:14:28 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:36.000 * Looking for test storage... 00:18:36.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79224 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79224 00:18:36.000 13:14:28 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:36.000 13:14:28 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79224 ']' 00:18:36.000 13:14:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.000 13:14:28 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.000 13:14:28 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.000 13:14:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.000 13:14:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:36.258 [2024-07-25 13:14:28.219980] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
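The xtrace above shows trim.sh exporting FTL_BDEV_NAME=ftl0 and FTL_JSON_CONF, launching spdk_tgt with core mask 0x7 (three cores, matching the three reactors reported just below), and blocking in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal standalone sketch of that startup step follows; the paths and flags are the ones visible in the trace, while the rpc_get_methods polling loop is only an assumed stand-in for the harness's waitforlisten helper.

    #!/usr/bin/env bash
    # Sketch only: approximates the target startup traced above (trim.sh / common.sh).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=$SPDK_DIR/test/ftl/config/ftl.json

    # Start the SPDK target on cores 0-2 (mask 0x7) and remember its PID.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x7 &
    svcpid=$!

    # Wait until the RPC server is reachable before issuing bdev RPCs
    # (the test harness does the equivalent wait in waitforlisten).
    until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done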
00:18:36.258 [2024-07-25 13:14:28.220352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79224 ] 00:18:36.258 [2024-07-25 13:14:28.386481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:36.516 [2024-07-25 13:14:28.582310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.516 [2024-07-25 13:14:28.582388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.516 [2024-07-25 13:14:28.582404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.131 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.131 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:37.131 13:14:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:37.131 13:14:29 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:37.131 13:14:29 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:37.131 13:14:29 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:37.131 13:14:29 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:37.131 13:14:29 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:37.694 13:14:29 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:37.694 13:14:29 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:37.694 13:14:29 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:37.694 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:37.694 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:37.694 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:37.694 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:37.694 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:37.952 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:37.952 { 00:18:37.952 "name": "nvme0n1", 00:18:37.952 "aliases": [ 00:18:37.952 "b7ea08ad-42bf-448c-83ba-3715cef2967c" 00:18:37.952 ], 00:18:37.952 "product_name": "NVMe disk", 00:18:37.952 "block_size": 4096, 00:18:37.952 "num_blocks": 1310720, 00:18:37.952 "uuid": "b7ea08ad-42bf-448c-83ba-3715cef2967c", 00:18:37.952 "assigned_rate_limits": { 00:18:37.952 "rw_ios_per_sec": 0, 00:18:37.952 "rw_mbytes_per_sec": 0, 00:18:37.952 "r_mbytes_per_sec": 0, 00:18:37.952 "w_mbytes_per_sec": 0 00:18:37.952 }, 00:18:37.952 "claimed": true, 00:18:37.952 "claim_type": "read_many_write_one", 00:18:37.952 "zoned": false, 00:18:37.952 "supported_io_types": { 00:18:37.952 "read": true, 00:18:37.952 "write": true, 00:18:37.952 "unmap": true, 00:18:37.952 "flush": true, 00:18:37.952 "reset": true, 00:18:37.952 "nvme_admin": true, 00:18:37.952 "nvme_io": true, 00:18:37.952 "nvme_io_md": false, 00:18:37.952 "write_zeroes": true, 00:18:37.952 "zcopy": false, 00:18:37.952 "get_zone_info": false, 00:18:37.952 "zone_management": false, 00:18:37.952 "zone_append": false, 00:18:37.952 "compare": true, 00:18:37.952 "compare_and_write": false, 00:18:37.952 "abort": true, 00:18:37.952 "seek_hole": false, 00:18:37.952 "seek_data": false, 00:18:37.952 
"copy": true, 00:18:37.952 "nvme_iov_md": false 00:18:37.952 }, 00:18:37.952 "driver_specific": { 00:18:37.952 "nvme": [ 00:18:37.952 { 00:18:37.952 "pci_address": "0000:00:11.0", 00:18:37.952 "trid": { 00:18:37.952 "trtype": "PCIe", 00:18:37.952 "traddr": "0000:00:11.0" 00:18:37.952 }, 00:18:37.952 "ctrlr_data": { 00:18:37.952 "cntlid": 0, 00:18:37.952 "vendor_id": "0x1b36", 00:18:37.952 "model_number": "QEMU NVMe Ctrl", 00:18:37.952 "serial_number": "12341", 00:18:37.952 "firmware_revision": "8.0.0", 00:18:37.952 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:37.952 "oacs": { 00:18:37.952 "security": 0, 00:18:37.952 "format": 1, 00:18:37.952 "firmware": 0, 00:18:37.952 "ns_manage": 1 00:18:37.952 }, 00:18:37.952 "multi_ctrlr": false, 00:18:37.952 "ana_reporting": false 00:18:37.952 }, 00:18:37.952 "vs": { 00:18:37.952 "nvme_version": "1.4" 00:18:37.952 }, 00:18:37.952 "ns_data": { 00:18:37.952 "id": 1, 00:18:37.952 "can_share": false 00:18:37.952 } 00:18:37.952 } 00:18:37.952 ], 00:18:37.952 "mp_policy": "active_passive" 00:18:37.952 } 00:18:37.952 } 00:18:37.952 ]' 00:18:37.952 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:37.952 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:37.952 13:14:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:37.952 13:14:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:37.952 13:14:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:37.952 13:14:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:37.952 13:14:30 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:37.952 13:14:30 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:37.952 13:14:30 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:37.952 13:14:30 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:37.952 13:14:30 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.209 13:14:30 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3cefecd9-3527-4f8c-9fa4-d256b85c145e 00:18:38.209 13:14:30 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:38.209 13:14:30 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3cefecd9-3527-4f8c-9fa4-d256b85c145e 00:18:38.467 13:14:30 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:38.735 13:14:30 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=281256ae-7e34-4298-8915-00a7e0faa7be 00:18:38.735 13:14:30 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 281256ae-7e34-4298-8915-00a7e0faa7be 00:18:39.006 13:14:31 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.006 13:14:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.006 13:14:31 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:39.006 13:14:31 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:39.006 13:14:31 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.006 13:14:31 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:39.006 13:14:31 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.006 13:14:31 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.006 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:39.006 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:39.006 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:39.006 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.264 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:39.264 { 00:18:39.264 "name": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:39.264 "aliases": [ 00:18:39.264 "lvs/nvme0n1p0" 00:18:39.264 ], 00:18:39.264 "product_name": "Logical Volume", 00:18:39.264 "block_size": 4096, 00:18:39.264 "num_blocks": 26476544, 00:18:39.264 "uuid": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:39.264 "assigned_rate_limits": { 00:18:39.264 "rw_ios_per_sec": 0, 00:18:39.264 "rw_mbytes_per_sec": 0, 00:18:39.264 "r_mbytes_per_sec": 0, 00:18:39.264 "w_mbytes_per_sec": 0 00:18:39.264 }, 00:18:39.264 "claimed": false, 00:18:39.264 "zoned": false, 00:18:39.264 "supported_io_types": { 00:18:39.264 "read": true, 00:18:39.264 "write": true, 00:18:39.264 "unmap": true, 00:18:39.264 "flush": false, 00:18:39.264 "reset": true, 00:18:39.264 "nvme_admin": false, 00:18:39.264 "nvme_io": false, 00:18:39.264 "nvme_io_md": false, 00:18:39.264 "write_zeroes": true, 00:18:39.264 "zcopy": false, 00:18:39.264 "get_zone_info": false, 00:18:39.264 "zone_management": false, 00:18:39.264 "zone_append": false, 00:18:39.264 "compare": false, 00:18:39.264 "compare_and_write": false, 00:18:39.264 "abort": false, 00:18:39.264 "seek_hole": true, 00:18:39.264 "seek_data": true, 00:18:39.264 "copy": false, 00:18:39.264 "nvme_iov_md": false 00:18:39.264 }, 00:18:39.264 "driver_specific": { 00:18:39.264 "lvol": { 00:18:39.264 "lvol_store_uuid": "281256ae-7e34-4298-8915-00a7e0faa7be", 00:18:39.264 "base_bdev": "nvme0n1", 00:18:39.264 "thin_provision": true, 00:18:39.264 "num_allocated_clusters": 0, 00:18:39.264 "snapshot": false, 00:18:39.264 "clone": false, 00:18:39.264 "esnap_clone": false 00:18:39.264 } 00:18:39.264 } 00:18:39.264 } 00:18:39.264 ]' 00:18:39.264 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:39.522 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:39.522 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:39.522 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:39.522 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:39.522 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:39.522 13:14:31 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:39.522 13:14:31 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:39.522 13:14:31 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:39.780 13:14:31 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.780 13:14:31 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.780 13:14:31 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.780 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:39.780 
13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:39.780 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:39.780 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:39.780 13:14:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:40.038 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:40.038 { 00:18:40.038 "name": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:40.038 "aliases": [ 00:18:40.038 "lvs/nvme0n1p0" 00:18:40.038 ], 00:18:40.038 "product_name": "Logical Volume", 00:18:40.038 "block_size": 4096, 00:18:40.038 "num_blocks": 26476544, 00:18:40.038 "uuid": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:40.038 "assigned_rate_limits": { 00:18:40.038 "rw_ios_per_sec": 0, 00:18:40.038 "rw_mbytes_per_sec": 0, 00:18:40.038 "r_mbytes_per_sec": 0, 00:18:40.038 "w_mbytes_per_sec": 0 00:18:40.038 }, 00:18:40.038 "claimed": false, 00:18:40.038 "zoned": false, 00:18:40.038 "supported_io_types": { 00:18:40.038 "read": true, 00:18:40.038 "write": true, 00:18:40.038 "unmap": true, 00:18:40.038 "flush": false, 00:18:40.038 "reset": true, 00:18:40.038 "nvme_admin": false, 00:18:40.038 "nvme_io": false, 00:18:40.038 "nvme_io_md": false, 00:18:40.038 "write_zeroes": true, 00:18:40.038 "zcopy": false, 00:18:40.038 "get_zone_info": false, 00:18:40.038 "zone_management": false, 00:18:40.038 "zone_append": false, 00:18:40.038 "compare": false, 00:18:40.038 "compare_and_write": false, 00:18:40.038 "abort": false, 00:18:40.038 "seek_hole": true, 00:18:40.038 "seek_data": true, 00:18:40.038 "copy": false, 00:18:40.038 "nvme_iov_md": false 00:18:40.038 }, 00:18:40.038 "driver_specific": { 00:18:40.038 "lvol": { 00:18:40.038 "lvol_store_uuid": "281256ae-7e34-4298-8915-00a7e0faa7be", 00:18:40.038 "base_bdev": "nvme0n1", 00:18:40.038 "thin_provision": true, 00:18:40.038 "num_allocated_clusters": 0, 00:18:40.038 "snapshot": false, 00:18:40.038 "clone": false, 00:18:40.038 "esnap_clone": false 00:18:40.038 } 00:18:40.038 } 00:18:40.038 } 00:18:40.038 ]' 00:18:40.038 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:40.038 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:40.038 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:40.038 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:40.038 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:40.038 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:40.038 13:14:32 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:40.038 13:14:32 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:40.296 13:14:32 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:40.296 13:14:32 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:40.296 13:14:32 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:40.296 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:40.296 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:40.296 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:40.296 13:14:32 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:18:40.296 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd 00:18:40.862 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:40.862 { 00:18:40.862 "name": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:40.862 "aliases": [ 00:18:40.862 "lvs/nvme0n1p0" 00:18:40.862 ], 00:18:40.862 "product_name": "Logical Volume", 00:18:40.862 "block_size": 4096, 00:18:40.862 "num_blocks": 26476544, 00:18:40.862 "uuid": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:40.862 "assigned_rate_limits": { 00:18:40.862 "rw_ios_per_sec": 0, 00:18:40.862 "rw_mbytes_per_sec": 0, 00:18:40.862 "r_mbytes_per_sec": 0, 00:18:40.862 "w_mbytes_per_sec": 0 00:18:40.862 }, 00:18:40.862 "claimed": false, 00:18:40.862 "zoned": false, 00:18:40.862 "supported_io_types": { 00:18:40.862 "read": true, 00:18:40.862 "write": true, 00:18:40.862 "unmap": true, 00:18:40.862 "flush": false, 00:18:40.862 "reset": true, 00:18:40.862 "nvme_admin": false, 00:18:40.862 "nvme_io": false, 00:18:40.862 "nvme_io_md": false, 00:18:40.862 "write_zeroes": true, 00:18:40.862 "zcopy": false, 00:18:40.862 "get_zone_info": false, 00:18:40.862 "zone_management": false, 00:18:40.862 "zone_append": false, 00:18:40.862 "compare": false, 00:18:40.862 "compare_and_write": false, 00:18:40.862 "abort": false, 00:18:40.862 "seek_hole": true, 00:18:40.862 "seek_data": true, 00:18:40.862 "copy": false, 00:18:40.862 "nvme_iov_md": false 00:18:40.862 }, 00:18:40.862 "driver_specific": { 00:18:40.862 "lvol": { 00:18:40.862 "lvol_store_uuid": "281256ae-7e34-4298-8915-00a7e0faa7be", 00:18:40.862 "base_bdev": "nvme0n1", 00:18:40.862 "thin_provision": true, 00:18:40.862 "num_allocated_clusters": 0, 00:18:40.862 "snapshot": false, 00:18:40.862 "clone": false, 00:18:40.862 "esnap_clone": false 00:18:40.862 } 00:18:40.862 } 00:18:40.862 } 00:18:40.862 ]' 00:18:40.862 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:40.862 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:40.862 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:40.862 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:40.862 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:40.862 13:14:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:40.862 13:14:32 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:40.862 13:14:32 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 855c2e8d-0f5f-489b-86a6-e1e144d2e4dd -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:41.121 [2024-07-25 13:14:33.128680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.128752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:41.121 [2024-07-25 13:14:33.128777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:41.121 [2024-07-25 13:14:33.128792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.132168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.132220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.121 [2024-07-25 13:14:33.132240] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:18:41.121 [2024-07-25 13:14:33.132255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.132433] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:41.121 [2024-07-25 13:14:33.133403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:41.121 [2024-07-25 13:14:33.133448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.133471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.121 [2024-07-25 13:14:33.133486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:18:41.121 [2024-07-25 13:14:33.133500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.133641] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 953998eb-5280-4452-a782-072824cd0df1 00:18:41.121 [2024-07-25 13:14:33.134720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.134764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:41.121 [2024-07-25 13:14:33.134786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:41.121 [2024-07-25 13:14:33.134800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.139654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.139712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.121 [2024-07-25 13:14:33.139736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.755 ms 00:18:41.121 [2024-07-25 13:14:33.139749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.139944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.139969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.121 [2024-07-25 13:14:33.139987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:41.121 [2024-07-25 13:14:33.140000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.140060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.140080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:41.121 [2024-07-25 13:14:33.140096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:41.121 [2024-07-25 13:14:33.140140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.140199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:41.121 [2024-07-25 13:14:33.144832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.144878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.121 [2024-07-25 13:14:33.144896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.648 ms 00:18:41.121 [2024-07-25 13:14:33.144911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 
13:14:33.145052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.145078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:41.121 [2024-07-25 13:14:33.145094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:41.121 [2024-07-25 13:14:33.145130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.145173] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:41.121 [2024-07-25 13:14:33.145338] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:41.121 [2024-07-25 13:14:33.145357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:41.121 [2024-07-25 13:14:33.145378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:41.121 [2024-07-25 13:14:33.145395] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:41.121 [2024-07-25 13:14:33.145427] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:41.121 [2024-07-25 13:14:33.145443] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:41.121 [2024-07-25 13:14:33.145458] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:41.121 [2024-07-25 13:14:33.145470] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:41.121 [2024-07-25 13:14:33.145509] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:41.121 [2024-07-25 13:14:33.145523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.145538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:41.121 [2024-07-25 13:14:33.145551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:18:41.121 [2024-07-25 13:14:33.145565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.145669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.121 [2024-07-25 13:14:33.145688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:41.121 [2024-07-25 13:14:33.145701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:41.121 [2024-07-25 13:14:33.145718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.121 [2024-07-25 13:14:33.145843] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:41.121 [2024-07-25 13:14:33.145867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:41.121 [2024-07-25 13:14:33.145880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.121 [2024-07-25 13:14:33.145895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.121 [2024-07-25 13:14:33.145908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:41.121 [2024-07-25 13:14:33.145921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:41.121 [2024-07-25 13:14:33.145933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:41.121 [2024-07-25 13:14:33.145946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:18:41.122 [2024-07-25 13:14:33.145958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:41.122 [2024-07-25 13:14:33.145972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.122 [2024-07-25 13:14:33.145983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:41.122 [2024-07-25 13:14:33.145997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:41.122 [2024-07-25 13:14:33.146009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.122 [2024-07-25 13:14:33.146041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:41.122 [2024-07-25 13:14:33.146053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:41.122 [2024-07-25 13:14:33.146067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:41.122 [2024-07-25 13:14:33.146095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:41.122 [2024-07-25 13:14:33.146107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:41.122 [2024-07-25 13:14:33.146160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.122 [2024-07-25 13:14:33.146187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:41.122 [2024-07-25 13:14:33.146200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.122 [2024-07-25 13:14:33.146225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:41.122 [2024-07-25 13:14:33.146237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.122 [2024-07-25 13:14:33.146262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:41.122 [2024-07-25 13:14:33.146276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.122 [2024-07-25 13:14:33.146301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:41.122 [2024-07-25 13:14:33.146313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.122 [2024-07-25 13:14:33.146348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:41.122 [2024-07-25 13:14:33.146362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:41.122 [2024-07-25 13:14:33.146374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.122 [2024-07-25 13:14:33.146387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:41.122 [2024-07-25 13:14:33.146399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:41.122 [2024-07-25 13:14:33.146415] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:41.122 [2024-07-25 13:14:33.146440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:41.122 [2024-07-25 13:14:33.146467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146480] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:41.122 [2024-07-25 13:14:33.146493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:41.122 [2024-07-25 13:14:33.146507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.122 [2024-07-25 13:14:33.146518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.122 [2024-07-25 13:14:33.146537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:41.122 [2024-07-25 13:14:33.146549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:41.122 [2024-07-25 13:14:33.146565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:41.122 [2024-07-25 13:14:33.146577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:41.122 [2024-07-25 13:14:33.146590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:41.122 [2024-07-25 13:14:33.146601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:41.122 [2024-07-25 13:14:33.146620] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:41.122 [2024-07-25 13:14:33.146635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.122 [2024-07-25 13:14:33.146651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:41.122 [2024-07-25 13:14:33.146663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:41.122 [2024-07-25 13:14:33.146678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:41.122 [2024-07-25 13:14:33.146690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:41.122 [2024-07-25 13:14:33.146704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:41.122 [2024-07-25 13:14:33.146717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:41.122 [2024-07-25 13:14:33.146731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:41.122 [2024-07-25 13:14:33.146743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:41.122 [2024-07-25 13:14:33.146759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:41.122 [2024-07-25 13:14:33.146772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:41.122 [2024-07-25 13:14:33.146788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:41.122 [2024-07-25 13:14:33.146803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:41.122 [2024-07-25 13:14:33.146817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:41.122 [2024-07-25 13:14:33.146830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:41.122 [2024-07-25 13:14:33.146844] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:41.122 [2024-07-25 13:14:33.146857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.122 [2024-07-25 13:14:33.146872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:41.122 [2024-07-25 13:14:33.146885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:41.122 [2024-07-25 13:14:33.146899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:41.122 [2024-07-25 13:14:33.146912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:41.122 [2024-07-25 13:14:33.146928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.122 [2024-07-25 13:14:33.146940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:41.122 [2024-07-25 13:14:33.146955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:18:41.122 [2024-07-25 13:14:33.146967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.122 [2024-07-25 13:14:33.147078] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
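The FTL startup trace above (superblock creation, layout dump, and the NV cache scrub that follows) runs against the bdev stack trim.sh assembled beforehand: a thin-provisioned 103424 MiB lvol on the base namespace at 0000:00:11.0 for data, and a 5171 MiB split of the namespace at 0000:00:10.0 as the non-volatile write-buffer cache. A condensed sketch of that RPC sequence, using only commands that appear in the trace and capturing the generated UUIDs at runtime rather than hard-coding the ones printed above, might be:

    #!/usr/bin/env bash
    # Sketch only: condenses the setup RPCs traced earlier into one script.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the base (data) and cache NVMe controllers.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

    # Thin-provisioned 103424 MiB lvol on the base namespace.
    lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

    # Carve one 5171 MiB partition out of the cache namespace for the NV cache.
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # Create the FTL bdev; startup (including the NV cache scrub) can take a
    # while, hence the 240 s RPC timeout used by the test.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10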
00:18:41.122 [2024-07-25 13:14:33.147097] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:43.021 [2024-07-25 13:14:35.205526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.021 [2024-07-25 13:14:35.205611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:43.021 [2024-07-25 13:14:35.205639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2058.448 ms 00:18:43.021 [2024-07-25 13:14:35.205654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.279 [2024-07-25 13:14:35.238841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.279 [2024-07-25 13:14:35.238917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.280 [2024-07-25 13:14:35.238945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.820 ms 00:18:43.280 [2024-07-25 13:14:35.238960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.239211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.239235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.280 [2024-07-25 13:14:35.239257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:43.280 [2024-07-25 13:14:35.239271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.290468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.290586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.280 [2024-07-25 13:14:35.290635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.137 ms 00:18:43.280 [2024-07-25 13:14:35.290664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.290884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.290922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.280 [2024-07-25 13:14:35.290957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:43.280 [2024-07-25 13:14:35.290985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.291517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.291592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.280 [2024-07-25 13:14:35.291634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:18:43.280 [2024-07-25 13:14:35.291662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.291931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.291975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.280 [2024-07-25 13:14:35.292013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:18:43.280 [2024-07-25 13:14:35.292041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.319630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.319725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.280 [2024-07-25 
13:14:35.319771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.515 ms 00:18:43.280 [2024-07-25 13:14:35.319792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.334843] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:43.280 [2024-07-25 13:14:35.349257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.349344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.280 [2024-07-25 13:14:35.349367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.241 ms 00:18:43.280 [2024-07-25 13:14:35.349384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.411472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.411568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:43.280 [2024-07-25 13:14:35.411592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.916 ms 00:18:43.280 [2024-07-25 13:14:35.411608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.411948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.411982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.280 [2024-07-25 13:14:35.411999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:18:43.280 [2024-07-25 13:14:35.412018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.280 [2024-07-25 13:14:35.444243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.280 [2024-07-25 13:14:35.444348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:43.280 [2024-07-25 13:14:35.444372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.172 ms 00:18:43.280 [2024-07-25 13:14:35.444389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.538 [2024-07-25 13:14:35.475873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.538 [2024-07-25 13:14:35.475951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:43.538 [2024-07-25 13:14:35.475976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.331 ms 00:18:43.538 [2024-07-25 13:14:35.475992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.538 [2024-07-25 13:14:35.476868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.538 [2024-07-25 13:14:35.476912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.538 [2024-07-25 13:14:35.476930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:18:43.538 [2024-07-25 13:14:35.476946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.538 [2024-07-25 13:14:35.564781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.538 [2024-07-25 13:14:35.564873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:43.538 [2024-07-25 13:14:35.564898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.787 ms 00:18:43.538 [2024-07-25 13:14:35.564928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.538 [2024-07-25 
13:14:35.598489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.539 [2024-07-25 13:14:35.598576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:43.539 [2024-07-25 13:14:35.598604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.264 ms 00:18:43.539 [2024-07-25 13:14:35.598621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.539 [2024-07-25 13:14:35.631691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.539 [2024-07-25 13:14:35.631781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:43.539 [2024-07-25 13:14:35.631805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.905 ms 00:18:43.539 [2024-07-25 13:14:35.631821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.539 [2024-07-25 13:14:35.664797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.539 [2024-07-25 13:14:35.665125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.539 [2024-07-25 13:14:35.665159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.801 ms 00:18:43.539 [2024-07-25 13:14:35.665178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.539 [2024-07-25 13:14:35.665367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.539 [2024-07-25 13:14:35.665393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:43.539 [2024-07-25 13:14:35.665409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:43.539 [2024-07-25 13:14:35.665428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.539 [2024-07-25 13:14:35.665531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.539 [2024-07-25 13:14:35.665552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.539 [2024-07-25 13:14:35.665567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:43.539 [2024-07-25 13:14:35.665609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.539 [2024-07-25 13:14:35.666723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:43.539 [2024-07-25 13:14:35.671228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2537.689 ms, result 0 00:18:43.539 [2024-07-25 13:14:35.672056] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:43.539 { 00:18:43.539 "name": "ftl0", 00:18:43.539 "uuid": "953998eb-5280-4452-a782-072824cd0df1" 00:18:43.539 } 00:18:43.539 13:14:35 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:43.539 13:14:35 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:43.539 13:14:35 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:43.539 13:14:35 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:18:43.539 13:14:35 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:43.539 13:14:35 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:43.539 13:14:35 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:44.104 13:14:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:44.363 [ 00:18:44.363 { 00:18:44.363 "name": "ftl0", 00:18:44.363 "aliases": [ 00:18:44.363 "953998eb-5280-4452-a782-072824cd0df1" 00:18:44.363 ], 00:18:44.363 "product_name": "FTL disk", 00:18:44.363 "block_size": 4096, 00:18:44.363 "num_blocks": 23592960, 00:18:44.363 "uuid": "953998eb-5280-4452-a782-072824cd0df1", 00:18:44.363 "assigned_rate_limits": { 00:18:44.363 "rw_ios_per_sec": 0, 00:18:44.363 "rw_mbytes_per_sec": 0, 00:18:44.363 "r_mbytes_per_sec": 0, 00:18:44.363 "w_mbytes_per_sec": 0 00:18:44.363 }, 00:18:44.363 "claimed": false, 00:18:44.363 "zoned": false, 00:18:44.363 "supported_io_types": { 00:18:44.363 "read": true, 00:18:44.363 "write": true, 00:18:44.363 "unmap": true, 00:18:44.363 "flush": true, 00:18:44.363 "reset": false, 00:18:44.363 "nvme_admin": false, 00:18:44.363 "nvme_io": false, 00:18:44.363 "nvme_io_md": false, 00:18:44.363 "write_zeroes": true, 00:18:44.363 "zcopy": false, 00:18:44.363 "get_zone_info": false, 00:18:44.363 "zone_management": false, 00:18:44.363 "zone_append": false, 00:18:44.363 "compare": false, 00:18:44.363 "compare_and_write": false, 00:18:44.363 "abort": false, 00:18:44.363 "seek_hole": false, 00:18:44.363 "seek_data": false, 00:18:44.363 "copy": false, 00:18:44.363 "nvme_iov_md": false 00:18:44.363 }, 00:18:44.363 "driver_specific": { 00:18:44.363 "ftl": { 00:18:44.363 "base_bdev": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:44.363 "cache": "nvc0n1p0" 00:18:44.363 } 00:18:44.363 } 00:18:44.363 } 00:18:44.363 ] 00:18:44.363 13:14:36 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:44.363 13:14:36 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:44.363 13:14:36 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:44.622 13:14:36 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:44.622 13:14:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:44.880 13:14:37 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:44.880 { 00:18:44.880 "name": "ftl0", 00:18:44.880 "aliases": [ 00:18:44.880 "953998eb-5280-4452-a782-072824cd0df1" 00:18:44.880 ], 00:18:44.880 "product_name": "FTL disk", 00:18:44.880 "block_size": 4096, 00:18:44.880 "num_blocks": 23592960, 00:18:44.880 "uuid": "953998eb-5280-4452-a782-072824cd0df1", 00:18:44.880 "assigned_rate_limits": { 00:18:44.880 "rw_ios_per_sec": 0, 00:18:44.880 "rw_mbytes_per_sec": 0, 00:18:44.880 "r_mbytes_per_sec": 0, 00:18:44.880 "w_mbytes_per_sec": 0 00:18:44.880 }, 00:18:44.880 "claimed": false, 00:18:44.880 "zoned": false, 00:18:44.880 "supported_io_types": { 00:18:44.880 "read": true, 00:18:44.880 "write": true, 00:18:44.880 "unmap": true, 00:18:44.880 "flush": true, 00:18:44.880 "reset": false, 00:18:44.880 "nvme_admin": false, 00:18:44.880 "nvme_io": false, 00:18:44.880 "nvme_io_md": false, 00:18:44.880 "write_zeroes": true, 00:18:44.880 "zcopy": false, 00:18:44.880 "get_zone_info": false, 00:18:44.880 "zone_management": false, 00:18:44.880 "zone_append": false, 00:18:44.880 "compare": false, 00:18:44.880 "compare_and_write": false, 00:18:44.880 "abort": false, 00:18:44.880 "seek_hole": false, 00:18:44.880 "seek_data": false, 00:18:44.880 "copy": false, 00:18:44.880 "nvme_iov_md": false 00:18:44.880 }, 00:18:44.880 "driver_specific": { 00:18:44.880 "ftl": { 00:18:44.880 "base_bdev": "855c2e8d-0f5f-489b-86a6-e1e144d2e4dd", 00:18:44.880 "cache": "nvc0n1p0" 
00:18:44.880 } 00:18:44.880 } 00:18:44.880 } 00:18:44.880 ]' 00:18:44.880 13:14:37 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:45.138 13:14:37 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:45.138 13:14:37 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:45.138 [2024-07-25 13:14:37.305303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.138 [2024-07-25 13:14:37.305380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:45.138 [2024-07-25 13:14:37.305408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:45.138 [2024-07-25 13:14:37.305423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.138 [2024-07-25 13:14:37.305478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:45.138 [2024-07-25 13:14:37.308854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.138 [2024-07-25 13:14:37.308904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:45.138 [2024-07-25 13:14:37.308924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:18:45.138 [2024-07-25 13:14:37.308944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.138 [2024-07-25 13:14:37.309615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.138 [2024-07-25 13:14:37.309662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:45.138 [2024-07-25 13:14:37.309682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:18:45.138 [2024-07-25 13:14:37.309702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.138 [2024-07-25 13:14:37.313462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.138 [2024-07-25 13:14:37.313531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:45.138 [2024-07-25 13:14:37.313551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:18:45.138 [2024-07-25 13:14:37.313567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.138 [2024-07-25 13:14:37.321168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.138 [2024-07-25 13:14:37.321233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:45.138 [2024-07-25 13:14:37.321253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.523 ms 00:18:45.138 [2024-07-25 13:14:37.321269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.353770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.397 [2024-07-25 13:14:37.353866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.397 [2024-07-25 13:14:37.353891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.353 ms 00:18:45.397 [2024-07-25 13:14:37.353913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.373469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.397 [2024-07-25 13:14:37.373569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.397 [2024-07-25 13:14:37.373599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.382 ms 00:18:45.397 
[2024-07-25 13:14:37.373616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.373968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.397 [2024-07-25 13:14:37.373996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.397 [2024-07-25 13:14:37.374012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:45.397 [2024-07-25 13:14:37.374028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.407820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.397 [2024-07-25 13:14:37.407914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:45.397 [2024-07-25 13:14:37.407938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.745 ms 00:18:45.397 [2024-07-25 13:14:37.407954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.440398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.397 [2024-07-25 13:14:37.440495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:45.397 [2024-07-25 13:14:37.440520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.251 ms 00:18:45.397 [2024-07-25 13:14:37.440539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.472650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.397 [2024-07-25 13:14:37.472747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.397 [2024-07-25 13:14:37.472772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.935 ms 00:18:45.397 [2024-07-25 13:14:37.472789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.505347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.397 [2024-07-25 13:14:37.505448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.397 [2024-07-25 13:14:37.505473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.323 ms 00:18:45.397 [2024-07-25 13:14:37.505488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.397 [2024-07-25 13:14:37.505660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.397 [2024-07-25 13:14:37.505697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505807] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.505997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506258] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 
13:14:37.506656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:45.397 [2024-07-25 13:14:37.506713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.506990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:18:45.398 [2024-07-25 13:14:37.507019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:45.398 [2024-07-25 13:14:37.507314] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:45.398 [2024-07-25 13:14:37.507328] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 953998eb-5280-4452-a782-072824cd0df1 00:18:45.398 [2024-07-25 13:14:37.507347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:45.398 [2024-07-25 13:14:37.507363] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:45.398 [2024-07-25 13:14:37.507377] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:45.398 [2024-07-25 13:14:37.507391] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:45.398 [2024-07-25 13:14:37.507405] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:45.398 [2024-07-25 13:14:37.507418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:45.398 [2024-07-25 13:14:37.507432] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:45.398 [2024-07-25 13:14:37.507444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:45.398 [2024-07-25 13:14:37.507457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:45.398 [2024-07-25 13:14:37.507471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.398 [2024-07-25 13:14:37.507486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:45.398 [2024-07-25 13:14:37.507502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:18:45.398 [2024-07-25 13:14:37.507516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.398 [2024-07-25 13:14:37.524726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.398 [2024-07-25 13:14:37.524813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:45.398 [2024-07-25 13:14:37.524837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.156 ms 00:18:45.398 [2024-07-25 13:14:37.524857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.398 [2024-07-25 13:14:37.525436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.398 [2024-07-25 13:14:37.525472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:45.398 [2024-07-25 13:14:37.525491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:18:45.398 [2024-07-25 13:14:37.525506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.590970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.591059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.656 [2024-07-25 13:14:37.591083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.591099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.591311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.591336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.656 [2024-07-25 13:14:37.591352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.591366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.591472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.591498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.656 [2024-07-25 13:14:37.591513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.591532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.591571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.591590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.656 [2024-07-25 13:14:37.591604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.591619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.698164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:45.656 [2024-07-25 13:14:37.698251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.656 [2024-07-25 13:14:37.698275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.698290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.787660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.787752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.656 [2024-07-25 13:14:37.787776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.787792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.787928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.787959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.656 [2024-07-25 13:14:37.787973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.787992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.788058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.788077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.656 [2024-07-25 13:14:37.788091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.788137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.788314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.788344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.656 [2024-07-25 13:14:37.788382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.788398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.788476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.788501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.656 [2024-07-25 13:14:37.788517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.788532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.788597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.788619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.656 [2024-07-25 13:14:37.788635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.788652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.656 [2024-07-25 13:14:37.788720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.656 [2024-07-25 13:14:37.788743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.656 [2024-07-25 13:14:37.788757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.656 [2024-07-25 13:14:37.788772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.657 [2024-07-25 
13:14:37.788992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 483.684 ms, result 0 00:18:45.657 true 00:18:45.657 13:14:37 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79224 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79224 ']' 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79224 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79224 00:18:45.657 killing process with pid 79224 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79224' 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79224 00:18:45.657 13:14:37 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79224 00:18:50.987 13:14:42 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:51.919 65536+0 records in 00:18:51.919 65536+0 records out 00:18:51.919 268435456 bytes (268 MB, 256 MiB) copied, 1.24121 s, 216 MB/s 00:18:51.919 13:14:43 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:51.919 [2024-07-25 13:14:44.015934] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:51.919 [2024-07-25 13:14:44.016173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79420 ] 00:18:52.176 [2024-07-25 13:14:44.208181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.434 [2024-07-25 13:14:44.407533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.691 [2024-07-25 13:14:44.716785] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:52.691 [2024-07-25 13:14:44.716869] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:52.691 [2024-07-25 13:14:44.878331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.692 [2024-07-25 13:14:44.878405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:52.692 [2024-07-25 13:14:44.878427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:52.692 [2024-07-25 13:14:44.878439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.881693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.881743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:52.952 [2024-07-25 13:14:44.881760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:18:52.952 [2024-07-25 13:14:44.881772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.881955] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:52.952 [2024-07-25 13:14:44.882924] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:52.952 [2024-07-25 13:14:44.882969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.882985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:52.952 [2024-07-25 13:14:44.882998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:18:52.952 [2024-07-25 13:14:44.883010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.884348] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:52.952 [2024-07-25 13:14:44.900679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.900759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:52.952 [2024-07-25 13:14:44.900789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.330 ms 00:18:52.952 [2024-07-25 13:14:44.900801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.901014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.901038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:52.952 [2024-07-25 13:14:44.901052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:52.952 [2024-07-25 13:14:44.901063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.905860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:52.952 [2024-07-25 13:14:44.905925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.952 [2024-07-25 13:14:44.905943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.689 ms 00:18:52.952 [2024-07-25 13:14:44.905955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.906149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.906175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.952 [2024-07-25 13:14:44.906189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:52.952 [2024-07-25 13:14:44.906200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.906251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.906268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:52.952 [2024-07-25 13:14:44.906285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:52.952 [2024-07-25 13:14:44.906295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.906330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:52.952 [2024-07-25 13:14:44.910636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.910680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.952 [2024-07-25 13:14:44.910696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:18:52.952 [2024-07-25 13:14:44.910707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.910791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.910811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:52.952 [2024-07-25 13:14:44.910824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:52.952 [2024-07-25 13:14:44.910835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.910868] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:52.952 [2024-07-25 13:14:44.910899] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:52.952 [2024-07-25 13:14:44.910946] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:52.952 [2024-07-25 13:14:44.910966] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:52.952 [2024-07-25 13:14:44.911073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:52.952 [2024-07-25 13:14:44.911089] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:52.952 [2024-07-25 13:14:44.911127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:52.952 [2024-07-25 13:14:44.911148] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:52.952 [2024-07-25 13:14:44.911161] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:52.952 [2024-07-25 13:14:44.911179] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:52.952 [2024-07-25 13:14:44.911198] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:52.952 [2024-07-25 13:14:44.911209] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:52.952 [2024-07-25 13:14:44.911219] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:52.952 [2024-07-25 13:14:44.911232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.911244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:52.952 [2024-07-25 13:14:44.911256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:18:52.952 [2024-07-25 13:14:44.911267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.911395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.952 [2024-07-25 13:14:44.911413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:52.952 [2024-07-25 13:14:44.911430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:52.952 [2024-07-25 13:14:44.911441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.952 [2024-07-25 13:14:44.911552] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:52.952 [2024-07-25 13:14:44.911587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:52.952 [2024-07-25 13:14:44.911600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.952 [2024-07-25 13:14:44.911611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.952 [2024-07-25 13:14:44.911622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:52.953 [2024-07-25 13:14:44.911633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:52.953 [2024-07-25 13:14:44.911653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:52.953 [2024-07-25 13:14:44.911663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.953 [2024-07-25 13:14:44.911683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:52.953 [2024-07-25 13:14:44.911693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:52.953 [2024-07-25 13:14:44.911703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.953 [2024-07-25 13:14:44.911713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:52.953 [2024-07-25 13:14:44.911723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:52.953 [2024-07-25 13:14:44.911734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:52.953 [2024-07-25 13:14:44.911755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:52.953 [2024-07-25 13:14:44.911780] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:52.953 [2024-07-25 13:14:44.911803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.953 [2024-07-25 13:14:44.911823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:52.953 [2024-07-25 13:14:44.911833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.953 [2024-07-25 13:14:44.911853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:52.953 [2024-07-25 13:14:44.911863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.953 [2024-07-25 13:14:44.911883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:52.953 [2024-07-25 13:14:44.911892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.953 [2024-07-25 13:14:44.911912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:52.953 [2024-07-25 13:14:44.911922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:52.953 [2024-07-25 13:14:44.911932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.953 [2024-07-25 13:14:44.911942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:52.953 [2024-07-25 13:14:44.911952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:52.953 [2024-07-25 13:14:44.911962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.953 [2024-07-25 13:14:44.911973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:52.953 [2024-07-25 13:14:44.911984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:52.953 [2024-07-25 13:14:44.911993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.953 [2024-07-25 13:14:44.912003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:52.953 [2024-07-25 13:14:44.912013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:52.953 [2024-07-25 13:14:44.912023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.953 [2024-07-25 13:14:44.912032] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:52.953 [2024-07-25 13:14:44.912044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:52.953 [2024-07-25 13:14:44.912054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.953 [2024-07-25 13:14:44.912065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.953 [2024-07-25 13:14:44.912082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:52.953 [2024-07-25 13:14:44.912092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:52.953 [2024-07-25 13:14:44.912115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:52.953 
[2024-07-25 13:14:44.912130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:52.953 [2024-07-25 13:14:44.912142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:52.953 [2024-07-25 13:14:44.912153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:52.953 [2024-07-25 13:14:44.912164] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:52.953 [2024-07-25 13:14:44.912178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.953 [2024-07-25 13:14:44.912191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:52.953 [2024-07-25 13:14:44.912202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:52.953 [2024-07-25 13:14:44.912213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:52.953 [2024-07-25 13:14:44.912225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:52.953 [2024-07-25 13:14:44.912236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:52.953 [2024-07-25 13:14:44.912247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:52.953 [2024-07-25 13:14:44.912258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:52.953 [2024-07-25 13:14:44.912269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:52.953 [2024-07-25 13:14:44.912280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:52.953 [2024-07-25 13:14:44.912291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:52.953 [2024-07-25 13:14:44.912302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:52.953 [2024-07-25 13:14:44.912313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:52.953 [2024-07-25 13:14:44.912325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:52.953 [2024-07-25 13:14:44.912337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:52.953 [2024-07-25 13:14:44.912348] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:52.953 [2024-07-25 13:14:44.912360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.953 [2024-07-25 13:14:44.912373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:52.953 [2024-07-25 13:14:44.912384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:52.953 [2024-07-25 13:14:44.912395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:52.953 [2024-07-25 13:14:44.912406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:52.953 [2024-07-25 13:14:44.912418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.953 [2024-07-25 13:14:44.912430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:52.953 [2024-07-25 13:14:44.912442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:18:52.953 [2024-07-25 13:14:44.912453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.953 [2024-07-25 13:14:44.954276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.953 [2024-07-25 13:14:44.954348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.953 [2024-07-25 13:14:44.954387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.745 ms 00:18:52.953 [2024-07-25 13:14:44.954400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.953 [2024-07-25 13:14:44.954613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.953 [2024-07-25 13:14:44.954636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:52.953 [2024-07-25 13:14:44.954656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:52.953 [2024-07-25 13:14:44.954668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.953 [2024-07-25 13:14:44.993444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.953 [2024-07-25 13:14:44.993509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.953 [2024-07-25 13:14:44.993530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.741 ms 00:18:52.953 [2024-07-25 13:14:44.993541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.953 [2024-07-25 13:14:44.993713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.953 [2024-07-25 13:14:44.993734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.953 [2024-07-25 13:14:44.993748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:52.953 [2024-07-25 13:14:44.993759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:44.994089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:44.994133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.954 [2024-07-25 13:14:44.994151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:18:52.954 [2024-07-25 13:14:44.994162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:44.994327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:44.994347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.954 [2024-07-25 13:14:44.994359] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:18:52.954 [2024-07-25 13:14:44.994370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:45.010900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:45.010967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.954 [2024-07-25 13:14:45.010988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.497 ms 00:18:52.954 [2024-07-25 13:14:45.011000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:45.027612] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:52.954 [2024-07-25 13:14:45.027695] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:52.954 [2024-07-25 13:14:45.027718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:45.027730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:52.954 [2024-07-25 13:14:45.027747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.483 ms 00:18:52.954 [2024-07-25 13:14:45.027758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:45.058316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:45.058411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:52.954 [2024-07-25 13:14:45.058432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.392 ms 00:18:52.954 [2024-07-25 13:14:45.058445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:45.074644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:45.074713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:52.954 [2024-07-25 13:14:45.074734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.044 ms 00:18:52.954 [2024-07-25 13:14:45.074745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:45.090495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:45.090555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:52.954 [2024-07-25 13:14:45.090573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.613 ms 00:18:52.954 [2024-07-25 13:14:45.090585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.954 [2024-07-25 13:14:45.091457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.954 [2024-07-25 13:14:45.091499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:52.954 [2024-07-25 13:14:45.091515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:18:52.954 [2024-07-25 13:14:45.091527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.165826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.165920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:53.212 [2024-07-25 13:14:45.165942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.262 ms 00:18:53.212 [2024-07-25 13:14:45.165955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.178991] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:53.212 [2024-07-25 13:14:45.193297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.193372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:53.212 [2024-07-25 13:14:45.193393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.139 ms 00:18:53.212 [2024-07-25 13:14:45.193406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.193556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.193579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:53.212 [2024-07-25 13:14:45.193598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:53.212 [2024-07-25 13:14:45.193609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.193678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.193695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:53.212 [2024-07-25 13:14:45.193707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:53.212 [2024-07-25 13:14:45.193718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.193751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.193767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:53.212 [2024-07-25 13:14:45.193780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:53.212 [2024-07-25 13:14:45.193796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.193837] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:53.212 [2024-07-25 13:14:45.193854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.193866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:53.212 [2024-07-25 13:14:45.193877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:53.212 [2024-07-25 13:14:45.193888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.225467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.225545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:53.212 [2024-07-25 13:14:45.225579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.544 ms 00:18:53.212 [2024-07-25 13:14:45.225592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.212 [2024-07-25 13:14:45.225794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.212 [2024-07-25 13:14:45.225817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:53.212 [2024-07-25 13:14:45.225830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:53.212 [2024-07-25 13:14:45.225842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:53.212 [2024-07-25 13:14:45.227053] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.212 [2024-07-25 13:14:45.231460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 348.379 ms, result 0 00:18:53.212 [2024-07-25 13:14:45.232263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:53.213 [2024-07-25 13:14:45.248963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.326  Copying: 27/256 [MB] (27 MBps) Copying: 55/256 [MB] (27 MBps) Copying: 84/256 [MB] (28 MBps) Copying: 112/256 [MB] (28 MBps) Copying: 141/256 [MB] (28 MBps) Copying: 168/256 [MB] (27 MBps) Copying: 195/256 [MB] (26 MBps) Copying: 222/256 [MB] (26 MBps) Copying: 249/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-25 13:14:54.500096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.326 [2024-07-25 13:14:54.512362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.326 [2024-07-25 13:14:54.512419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:02.326 [2024-07-25 13:14:54.512440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.326 [2024-07-25 13:14:54.512452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.326 [2024-07-25 13:14:54.512490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:02.636 [2024-07-25 13:14:54.515784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.515830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:02.636 [2024-07-25 13:14:54.515846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:19:02.636 [2024-07-25 13:14:54.515858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.517578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.517624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:02.636 [2024-07-25 13:14:54.517641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:19:02.636 [2024-07-25 13:14:54.517653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.524722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.524766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:02.636 [2024-07-25 13:14:54.524783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.044 ms 00:19:02.636 [2024-07-25 13:14:54.524802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.532349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.532389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:02.636 [2024-07-25 13:14:54.532404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.480 ms 00:19:02.636 [2024-07-25 13:14:54.532416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.563637] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.563703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:02.636 [2024-07-25 13:14:54.563722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.161 ms 00:19:02.636 [2024-07-25 13:14:54.563733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.581483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.581543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:02.636 [2024-07-25 13:14:54.581562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.667 ms 00:19:02.636 [2024-07-25 13:14:54.581574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.581756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.581775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:02.636 [2024-07-25 13:14:54.581788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:02.636 [2024-07-25 13:14:54.581799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.613397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.613474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:02.636 [2024-07-25 13:14:54.613495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.571 ms 00:19:02.636 [2024-07-25 13:14:54.613507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.645171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.645243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:02.636 [2024-07-25 13:14:54.645263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.539 ms 00:19:02.636 [2024-07-25 13:14:54.645275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.676540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.676613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:02.636 [2024-07-25 13:14:54.676633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.153 ms 00:19:02.636 [2024-07-25 13:14:54.676644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.707591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-25 13:14:54.707682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:02.636 [2024-07-25 13:14:54.707703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.811 ms 00:19:02.636 [2024-07-25 13:14:54.707714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-25 13:14:54.707817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:02.636 [2024-07-25 13:14:54.707844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 
13:14:54.707880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:02.636 [2024-07-25 13:14:54.707984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.707995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:19:02.637 [2024-07-25 13:14:54.708196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.708984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:02.637 [2024-07-25 13:14:54.709158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:02.638 [2024-07-25 13:14:54.709180] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:02.638 [2024-07-25 13:14:54.709191] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
953998eb-5280-4452-a782-072824cd0df1 00:19:02.638 [2024-07-25 13:14:54.709208] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:02.638 [2024-07-25 13:14:54.709219] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:02.638 [2024-07-25 13:14:54.709229] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:02.638 [2024-07-25 13:14:54.709256] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:02.638 [2024-07-25 13:14:54.709267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:02.638 [2024-07-25 13:14:54.709277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:02.638 [2024-07-25 13:14:54.709288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:02.638 [2024-07-25 13:14:54.709298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:02.638 [2024-07-25 13:14:54.709307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:02.638 [2024-07-25 13:14:54.709318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.638 [2024-07-25 13:14:54.709329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:02.638 [2024-07-25 13:14:54.709342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:19:02.638 [2024-07-25 13:14:54.709358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.638 [2024-07-25 13:14:54.725926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.638 [2024-07-25 13:14:54.725982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:02.638 [2024-07-25 13:14:54.726002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.534 ms 00:19:02.638 [2024-07-25 13:14:54.726013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.638 [2024-07-25 13:14:54.726579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.638 [2024-07-25 13:14:54.726612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:02.638 [2024-07-25 13:14:54.726636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:19:02.638 [2024-07-25 13:14:54.726647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.638 [2024-07-25 13:14:54.766465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.638 [2024-07-25 13:14:54.766538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.638 [2024-07-25 13:14:54.766558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.638 [2024-07-25 13:14:54.766569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.638 [2024-07-25 13:14:54.766692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.638 [2024-07-25 13:14:54.766710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.638 [2024-07-25 13:14:54.766727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.638 [2024-07-25 13:14:54.766737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.638 [2024-07-25 13:14:54.766804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.638 [2024-07-25 13:14:54.766823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.638 
[2024-07-25 13:14:54.766835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.638 [2024-07-25 13:14:54.766846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.638 [2024-07-25 13:14:54.766871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.638 [2024-07-25 13:14:54.766885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.638 [2024-07-25 13:14:54.766896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.638 [2024-07-25 13:14:54.766912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.865349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.865420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.909 [2024-07-25 13:14:54.865440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.865452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.949785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.949861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.909 [2024-07-25 13:14:54.949894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.949905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.949997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.950015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.909 [2024-07-25 13:14:54.950027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.950038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.950073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.950087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.909 [2024-07-25 13:14:54.950099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.950136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.950270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.950289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.909 [2024-07-25 13:14:54.950302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.950312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.950362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.950388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:02.909 [2024-07-25 13:14:54.950401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.950411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.950466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.950481] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.909 [2024-07-25 13:14:54.950492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.950502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.950565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.909 [2024-07-25 13:14:54.950593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.909 [2024-07-25 13:14:54.950613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.909 [2024-07-25 13:14:54.950628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.909 [2024-07-25 13:14:54.950811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 438.464 ms, result 0 00:19:04.287 00:19:04.287 00:19:04.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.287 13:14:56 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79545 00:19:04.287 13:14:56 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79545 00:19:04.287 13:14:56 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79545 ']' 00:19:04.287 13:14:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.287 13:14:56 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.287 13:14:56 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:04.287 13:14:56 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.287 13:14:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.287 13:14:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:04.287 [2024-07-25 13:14:56.292801] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:04.287 [2024-07-25 13:14:56.292980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79545 ] 00:19:04.287 [2024-07-25 13:14:56.459852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.546 [2024-07-25 13:14:56.644337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.483 13:14:57 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.483 13:14:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:05.483 13:14:57 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:05.483 [2024-07-25 13:14:57.614647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:05.483 [2024-07-25 13:14:57.614728] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:05.742 [2024-07-25 13:14:57.799080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.799158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:05.742 [2024-07-25 13:14:57.799182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:05.742 [2024-07-25 13:14:57.799200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.802854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.802906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.742 [2024-07-25 13:14:57.802934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.619 ms 00:19:05.742 [2024-07-25 13:14:57.802950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.803127] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:05.742 [2024-07-25 13:14:57.804086] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:05.742 [2024-07-25 13:14:57.804140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.804161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.742 [2024-07-25 13:14:57.804175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:19:05.742 [2024-07-25 13:14:57.804192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.805711] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:05.742 [2024-07-25 13:14:57.822036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.822099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:05.742 [2024-07-25 13:14:57.822165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.318 ms 00:19:05.742 [2024-07-25 13:14:57.822180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.822371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.822409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:05.742 [2024-07-25 13:14:57.822430] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:05.742 [2024-07-25 13:14:57.822442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.827325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.827385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.742 [2024-07-25 13:14:57.827415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.794 ms 00:19:05.742 [2024-07-25 13:14:57.827427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.827665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.827693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.742 [2024-07-25 13:14:57.827711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:05.742 [2024-07-25 13:14:57.827728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.827775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.827790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:05.742 [2024-07-25 13:14:57.827805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:05.742 [2024-07-25 13:14:57.827816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.827855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:05.742 [2024-07-25 13:14:57.832289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.832331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.742 [2024-07-25 13:14:57.832348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.448 ms 00:19:05.742 [2024-07-25 13:14:57.832361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.832452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.832478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:05.742 [2024-07-25 13:14:57.832494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:05.742 [2024-07-25 13:14:57.832506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.832551] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:05.742 [2024-07-25 13:14:57.832581] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:05.742 [2024-07-25 13:14:57.832633] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:05.742 [2024-07-25 13:14:57.832660] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:05.742 [2024-07-25 13:14:57.832767] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:05.742 [2024-07-25 13:14:57.832792] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:05.742 [2024-07-25 13:14:57.832807] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:05.742 [2024-07-25 13:14:57.832825] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:05.742 [2024-07-25 13:14:57.832839] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:05.742 [2024-07-25 13:14:57.832854] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:05.742 [2024-07-25 13:14:57.832880] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:05.742 [2024-07-25 13:14:57.832904] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:05.742 [2024-07-25 13:14:57.832915] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:05.742 [2024-07-25 13:14:57.832932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.832943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:05.742 [2024-07-25 13:14:57.832956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:19:05.742 [2024-07-25 13:14:57.832969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.833093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.742 [2024-07-25 13:14:57.833110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:05.742 [2024-07-25 13:14:57.833146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:05.742 [2024-07-25 13:14:57.833162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.742 [2024-07-25 13:14:57.833286] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:05.742 [2024-07-25 13:14:57.833306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:05.742 [2024-07-25 13:14:57.833321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.742 [2024-07-25 13:14:57.833334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.742 [2024-07-25 13:14:57.833352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:05.742 [2024-07-25 13:14:57.833364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:05.742 [2024-07-25 13:14:57.833377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:05.742 [2024-07-25 13:14:57.833391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:05.742 [2024-07-25 13:14:57.833417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:05.742 [2024-07-25 13:14:57.833436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:05.742 [2024-07-25 13:14:57.833460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:05.742 [2024-07-25 13:14:57.833473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:05.742 [2024-07-25 13:14:57.833486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:05.742 [2024-07-25 13:14:57.833497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:05.742 [2024-07-25 13:14:57.833510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:05.742 [2024-07-25 13:14:57.833521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.742 
[2024-07-25 13:14:57.833534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:05.742 [2024-07-25 13:14:57.833545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:05.742 [2024-07-25 13:14:57.833557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.742 [2024-07-25 13:14:57.833568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:05.742 [2024-07-25 13:14:57.833582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:05.742 [2024-07-25 13:14:57.833593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.742 [2024-07-25 13:14:57.833605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:05.742 [2024-07-25 13:14:57.833616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:05.742 [2024-07-25 13:14:57.833633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.742 [2024-07-25 13:14:57.833644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:05.743 [2024-07-25 13:14:57.833657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:05.743 [2024-07-25 13:14:57.833680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.743 [2024-07-25 13:14:57.833696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:05.743 [2024-07-25 13:14:57.833714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:05.743 [2024-07-25 13:14:57.833737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.743 [2024-07-25 13:14:57.833756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:05.743 [2024-07-25 13:14:57.833779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:05.743 [2024-07-25 13:14:57.833810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.743 [2024-07-25 13:14:57.833831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:05.743 [2024-07-25 13:14:57.833843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:05.743 [2024-07-25 13:14:57.833856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.743 [2024-07-25 13:14:57.833867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:05.743 [2024-07-25 13:14:57.833881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:05.743 [2024-07-25 13:14:57.833891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.743 [2024-07-25 13:14:57.833906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:05.743 [2024-07-25 13:14:57.833917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:05.743 [2024-07-25 13:14:57.833930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.743 [2024-07-25 13:14:57.833941] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:05.743 [2024-07-25 13:14:57.833955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:05.743 [2024-07-25 13:14:57.833966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.743 [2024-07-25 13:14:57.833984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.743 [2024-07-25 13:14:57.834004] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:05.743 [2024-07-25 13:14:57.834028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:05.743 [2024-07-25 13:14:57.834048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:05.743 [2024-07-25 13:14:57.834072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:05.743 [2024-07-25 13:14:57.834093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:05.743 [2024-07-25 13:14:57.834121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:05.743 [2024-07-25 13:14:57.834137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:05.743 [2024-07-25 13:14:57.834154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.743 [2024-07-25 13:14:57.834168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:05.743 [2024-07-25 13:14:57.834187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:05.743 [2024-07-25 13:14:57.834199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:05.743 [2024-07-25 13:14:57.834214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:05.743 [2024-07-25 13:14:57.834226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:05.743 [2024-07-25 13:14:57.834239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:05.743 [2024-07-25 13:14:57.834251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:05.743 [2024-07-25 13:14:57.834265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:05.743 [2024-07-25 13:14:57.834277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:05.743 [2024-07-25 13:14:57.834294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:05.743 [2024-07-25 13:14:57.834315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:05.743 [2024-07-25 13:14:57.834340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:05.743 [2024-07-25 13:14:57.834358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:05.743 [2024-07-25 13:14:57.834374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:05.743 [2024-07-25 13:14:57.834386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:05.743 [2024-07-25 
13:14:57.834400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.743 [2024-07-25 13:14:57.834413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:05.743 [2024-07-25 13:14:57.834430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:05.743 [2024-07-25 13:14:57.834449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:05.743 [2024-07-25 13:14:57.834463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:05.743 [2024-07-25 13:14:57.834477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.743 [2024-07-25 13:14:57.834491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:05.743 [2024-07-25 13:14:57.834504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:19:05.743 [2024-07-25 13:14:57.834522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.743 [2024-07-25 13:14:57.868263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.743 [2024-07-25 13:14:57.868331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:05.743 [2024-07-25 13:14:57.868357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.656 ms 00:19:05.743 [2024-07-25 13:14:57.868372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.743 [2024-07-25 13:14:57.868565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.743 [2024-07-25 13:14:57.868590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:05.743 [2024-07-25 13:14:57.868619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:05.743 [2024-07-25 13:14:57.868632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.743 [2024-07-25 13:14:57.909740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.743 [2024-07-25 13:14:57.909848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.743 [2024-07-25 13:14:57.909871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.076 ms 00:19:05.743 [2024-07-25 13:14:57.909889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.743 [2024-07-25 13:14:57.910037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.743 [2024-07-25 13:14:57.910067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.743 [2024-07-25 13:14:57.910083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:05.743 [2024-07-25 13:14:57.910100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.743 [2024-07-25 13:14:57.910458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.743 [2024-07-25 13:14:57.910516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.743 [2024-07-25 13:14:57.910533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:05.743 [2024-07-25 13:14:57.910550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:05.743 [2024-07-25 13:14:57.910724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.743 [2024-07-25 13:14:57.910750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.743 [2024-07-25 13:14:57.910764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:05.743 [2024-07-25 13:14:57.910781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:57.930218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:57.930288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.002 [2024-07-25 13:14:57.930310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.404 ms 00:19:06.002 [2024-07-25 13:14:57.930327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:57.947544] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:06.002 [2024-07-25 13:14:57.947636] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:06.002 [2024-07-25 13:14:57.947665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:57.947699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:06.002 [2024-07-25 13:14:57.947716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.158 ms 00:19:06.002 [2024-07-25 13:14:57.947732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:57.977354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:57.977421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:06.002 [2024-07-25 13:14:57.977443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.513 ms 00:19:06.002 [2024-07-25 13:14:57.977470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:57.993397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:57.993499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:06.002 [2024-07-25 13:14:57.993538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.797 ms 00:19:06.002 [2024-07-25 13:14:57.993563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:58.009819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:58.009906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:06.002 [2024-07-25 13:14:58.009925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.099 ms 00:19:06.002 [2024-07-25 13:14:58.009942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:58.010852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:58.010900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:06.002 [2024-07-25 13:14:58.010919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:19:06.002 [2024-07-25 13:14:58.010937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 
13:14:58.099237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:58.099328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:06.002 [2024-07-25 13:14:58.099352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.263 ms 00:19:06.002 [2024-07-25 13:14:58.099371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:58.112228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:06.002 [2024-07-25 13:14:58.126223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:58.126296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:06.002 [2024-07-25 13:14:58.126324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.671 ms 00:19:06.002 [2024-07-25 13:14:58.126337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:58.126505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:58.126526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:06.002 [2024-07-25 13:14:58.126542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:06.002 [2024-07-25 13:14:58.126554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:58.126624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:58.126643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:06.002 [2024-07-25 13:14:58.126658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:06.002 [2024-07-25 13:14:58.126670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.002 [2024-07-25 13:14:58.126708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.002 [2024-07-25 13:14:58.126724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:06.002 [2024-07-25 13:14:58.126739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:06.003 [2024-07-25 13:14:58.126751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.003 [2024-07-25 13:14:58.126794] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:06.003 [2024-07-25 13:14:58.126811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.003 [2024-07-25 13:14:58.126827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:06.003 [2024-07-25 13:14:58.126842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:06.003 [2024-07-25 13:14:58.126855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.003 [2024-07-25 13:14:58.158023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.003 [2024-07-25 13:14:58.158089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:06.003 [2024-07-25 13:14:58.158133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.137 ms 00:19:06.003 [2024-07-25 13:14:58.158152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.003 [2024-07-25 13:14:58.158292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.003 [2024-07-25 13:14:58.158323] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:06.003 [2024-07-25 13:14:58.158337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:06.003 [2024-07-25 13:14:58.158350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.003 [2024-07-25 13:14:58.159319] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:06.003 [2024-07-25 13:14:58.163476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 359.876 ms, result 0 00:19:06.003 [2024-07-25 13:14:58.164558] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:06.261 Some configs were skipped because the RPC state that can call them passed over. 00:19:06.262 13:14:58 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:06.520 [2024-07-25 13:14:58.502770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.520 [2024-07-25 13:14:58.503134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:06.520 [2024-07-25 13:14:58.503302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:19:06.520 [2024-07-25 13:14:58.503489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.520 [2024-07-25 13:14:58.503613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.391 ms, result 0 00:19:06.520 true 00:19:06.520 13:14:58 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:06.778 [2024-07-25 13:14:58.778633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.778 [2024-07-25 13:14:58.778905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:06.778 [2024-07-25 13:14:58.778938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:19:06.778 [2024-07-25 13:14:58.778958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.778 [2024-07-25 13:14:58.779030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.456 ms, result 0 00:19:06.778 true 00:19:06.778 13:14:58 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79545 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79545 ']' 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79545 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79545 00:19:06.778 killing process with pid 79545 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79545' 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79545 00:19:06.778 13:14:58 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79545 00:19:07.714 [2024-07-25 13:14:59.766710] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.766796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:07.714 [2024-07-25 13:14:59.766839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:07.714 [2024-07-25 13:14:59.766852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.766887] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:07.714 [2024-07-25 13:14:59.770339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.770380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:07.714 [2024-07-25 13:14:59.770414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms 00:19:07.714 [2024-07-25 13:14:59.770445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.770744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.770768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:07.714 [2024-07-25 13:14:59.770781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:07.714 [2024-07-25 13:14:59.770793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.775140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.775204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:07.714 [2024-07-25 13:14:59.775222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.324 ms 00:19:07.714 [2024-07-25 13:14:59.775236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.783011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.783106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:07.714 [2024-07-25 13:14:59.783163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.709 ms 00:19:07.714 [2024-07-25 13:14:59.783185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.796890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.797014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:07.714 [2024-07-25 13:14:59.797053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.567 ms 00:19:07.714 [2024-07-25 13:14:59.797070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.805952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.806003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:07.714 [2024-07-25 13:14:59.806022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.802 ms 00:19:07.714 [2024-07-25 13:14:59.806036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.806244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.806271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:07.714 [2024-07-25 13:14:59.806286] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:07.714 [2024-07-25 13:14:59.806313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.819742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.819817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:07.714 [2024-07-25 13:14:59.819836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.402 ms 00:19:07.714 [2024-07-25 13:14:59.819854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.832645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.832698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:07.714 [2024-07-25 13:14:59.832717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.741 ms 00:19:07.714 [2024-07-25 13:14:59.832743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.845338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.845391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:07.714 [2024-07-25 13:14:59.845410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms 00:19:07.714 [2024-07-25 13:14:59.845428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.857655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.714 [2024-07-25 13:14:59.857723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:07.714 [2024-07-25 13:14:59.857743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.132 ms 00:19:07.714 [2024-07-25 13:14:59.857760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.714 [2024-07-25 13:14:59.857805] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:07.715 [2024-07-25 13:14:59.857846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.857989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 
13:14:59.858006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:07.715 [2024-07-25 13:14:59.858456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.858982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:07.715 [2024-07-25 13:14:59.859398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:07.716 [2024-07-25 13:14:59.859602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:07.716 [2024-07-25 13:14:59.859623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 953998eb-5280-4452-a782-072824cd0df1 00:19:07.716 [2024-07-25 13:14:59.859662] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:07.716 [2024-07-25 13:14:59.859680] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:07.716 [2024-07-25 13:14:59.859697] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:07.716 [2024-07-25 13:14:59.859710] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:07.716 [2024-07-25 13:14:59.859725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:07.716 [2024-07-25 13:14:59.859738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:07.716 [2024-07-25 13:14:59.859754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:07.716 [2024-07-25 13:14:59.859765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:07.716 [2024-07-25 13:14:59.859799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:07.716 [2024-07-25 13:14:59.859812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:07.716 [2024-07-25 13:14:59.859829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:07.716 [2024-07-25 13:14:59.859848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:19:07.716 [2024-07-25 13:14:59.859865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.716 [2024-07-25 13:14:59.876567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.716 [2024-07-25 13:14:59.876636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:07.716 [2024-07-25 13:14:59.876655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.640 ms 00:19:07.716 [2024-07-25 13:14:59.876678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.716 [2024-07-25 13:14:59.877228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.716 [2024-07-25 13:14:59.877283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:07.716 [2024-07-25 13:14:59.877301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:19:07.716 [2024-07-25 13:14:59.877318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:14:59.934650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:14:59.934744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.975 [2024-07-25 13:14:59.934765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:14:59.934783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:14:59.934926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:14:59.934962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.975 [2024-07-25 13:14:59.934975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:14:59.934990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:14:59.935058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:14:59.935086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.975 [2024-07-25 13:14:59.935116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:14:59.935177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:14:59.935207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:14:59.935229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.975 [2024-07-25 13:14:59.935248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:14:59.935274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:15:00.035199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.035302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.975 [2024-07-25 13:15:00.035324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.035343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 
13:15:00.120631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.120731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.975 [2024-07-25 13:15:00.120753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.120771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:15:00.120884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.120913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.975 [2024-07-25 13:15:00.120929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.120951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:15:00.120989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.121022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.975 [2024-07-25 13:15:00.121038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.121074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:15:00.121240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.121273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.975 [2024-07-25 13:15:00.121289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.121306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:15:00.121368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.121396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:07.975 [2024-07-25 13:15:00.121411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.121428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:15:00.121482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.121513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.975 [2024-07-25 13:15:00.121533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.121566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.975 [2024-07-25 13:15:00.121648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.975 [2024-07-25 13:15:00.121682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.975 [2024-07-25 13:15:00.121697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.975 [2024-07-25 13:15:00.121721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.976 [2024-07-25 13:15:00.121917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.177 ms, result 0 00:19:08.911 13:15:01 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:08.911 13:15:01 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:09.170 [2024-07-25 13:15:01.174886] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:09.170 [2024-07-25 13:15:01.175048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79609 ] 00:19:09.170 [2024-07-25 13:15:01.334661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.428 [2024-07-25 13:15:01.521089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.686 [2024-07-25 13:15:01.834691] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:09.687 [2024-07-25 13:15:01.834792] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:09.946 [2024-07-25 13:15:01.997021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:01.997101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:09.946 [2024-07-25 13:15:01.997136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:09.946 [2024-07-25 13:15:01.997149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.000472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.000519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:09.946 [2024-07-25 13:15:02.000539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:19:09.946 [2024-07-25 13:15:02.000551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.000739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:09.946 [2024-07-25 13:15:02.001787] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:09.946 [2024-07-25 13:15:02.001832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.001848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:09.946 [2024-07-25 13:15:02.001861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:19:09.946 [2024-07-25 13:15:02.001873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.003215] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:09.946 [2024-07-25 13:15:02.019772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.019859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:09.946 [2024-07-25 13:15:02.019890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.554 ms 00:19:09.946 [2024-07-25 13:15:02.019902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.020137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.020171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:09.946 [2024-07-25 13:15:02.020188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.068 ms 00:19:09.946 [2024-07-25 13:15:02.020199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.024988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.025081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:09.946 [2024-07-25 13:15:02.025101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.706 ms 00:19:09.946 [2024-07-25 13:15:02.025127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.025301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.025331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:09.946 [2024-07-25 13:15:02.025353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:09.946 [2024-07-25 13:15:02.025372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.025429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.025446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:09.946 [2024-07-25 13:15:02.025472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:09.946 [2024-07-25 13:15:02.025489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.025529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:09.946 [2024-07-25 13:15:02.029869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.029917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:09.946 [2024-07-25 13:15:02.029933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:19:09.946 [2024-07-25 13:15:02.029945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.030079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.946 [2024-07-25 13:15:02.030124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:09.946 [2024-07-25 13:15:02.030141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:09.946 [2024-07-25 13:15:02.030152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.946 [2024-07-25 13:15:02.030195] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:09.946 [2024-07-25 13:15:02.030236] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:09.946 [2024-07-25 13:15:02.030300] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:09.946 [2024-07-25 13:15:02.030323] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:09.946 [2024-07-25 13:15:02.030446] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:09.946 [2024-07-25 13:15:02.030469] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:09.946 [2024-07-25 13:15:02.030494] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:09.946 [2024-07-25 13:15:02.030511] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:09.946 [2024-07-25 13:15:02.030524] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:09.946 [2024-07-25 13:15:02.030542] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:09.946 [2024-07-25 13:15:02.030553] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:09.946 [2024-07-25 13:15:02.030566] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:09.946 [2024-07-25 13:15:02.030583] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:09.947 [2024-07-25 13:15:02.030604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.947 [2024-07-25 13:15:02.030622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:09.947 [2024-07-25 13:15:02.030635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:19:09.947 [2024-07-25 13:15:02.030646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.947 [2024-07-25 13:15:02.030753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.947 [2024-07-25 13:15:02.030772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:09.947 [2024-07-25 13:15:02.030789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:09.947 [2024-07-25 13:15:02.030800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.947 [2024-07-25 13:15:02.030928] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:09.947 [2024-07-25 13:15:02.030954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:09.947 [2024-07-25 13:15:02.030974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:09.947 [2024-07-25 13:15:02.030986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.947 [2024-07-25 13:15:02.030998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:09.947 [2024-07-25 13:15:02.031008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:09.947 [2024-07-25 13:15:02.031041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:09.947 [2024-07-25 13:15:02.031076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:09.947 [2024-07-25 13:15:02.031095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:09.947 [2024-07-25 13:15:02.031121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:09.947 [2024-07-25 13:15:02.031134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:09.947 [2024-07-25 13:15:02.031145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:09.947 [2024-07-25 13:15:02.031155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031165] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:09.947 [2024-07-25 13:15:02.031177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:09.947 [2024-07-25 13:15:02.031243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:09.947 [2024-07-25 13:15:02.031273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:09.947 [2024-07-25 13:15:02.031304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:09.947 [2024-07-25 13:15:02.031350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:09.947 [2024-07-25 13:15:02.031403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:09.947 [2024-07-25 13:15:02.031423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:09.947 [2024-07-25 13:15:02.031432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:09.947 [2024-07-25 13:15:02.031442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:09.947 [2024-07-25 13:15:02.031455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:09.947 [2024-07-25 13:15:02.031473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:09.947 [2024-07-25 13:15:02.031489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:09.947 [2024-07-25 13:15:02.031509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:09.947 [2024-07-25 13:15:02.031519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031529] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:09.947 [2024-07-25 13:15:02.031540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:09.947 [2024-07-25 13:15:02.031551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.947 [2024-07-25 13:15:02.031579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:09.947 
[2024-07-25 13:15:02.031596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:09.947 [2024-07-25 13:15:02.031613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:09.947 [2024-07-25 13:15:02.031632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:09.947 [2024-07-25 13:15:02.031646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:09.947 [2024-07-25 13:15:02.031657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:09.947 [2024-07-25 13:15:02.031669] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:09.947 [2024-07-25 13:15:02.031683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:09.947 [2024-07-25 13:15:02.031696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:09.947 [2024-07-25 13:15:02.031707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:09.947 [2024-07-25 13:15:02.031722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:09.947 [2024-07-25 13:15:02.031742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:09.947 [2024-07-25 13:15:02.031759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:09.947 [2024-07-25 13:15:02.031777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:09.947 [2024-07-25 13:15:02.031790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:09.947 [2024-07-25 13:15:02.031801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:09.947 [2024-07-25 13:15:02.031811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:09.947 [2024-07-25 13:15:02.031823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:09.947 [2024-07-25 13:15:02.031834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:09.947 [2024-07-25 13:15:02.031845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:09.947 [2024-07-25 13:15:02.031856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:09.947 [2024-07-25 13:15:02.031868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:09.947 [2024-07-25 13:15:02.031885] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:09.947 [2024-07-25 13:15:02.031907] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:09.947 [2024-07-25 13:15:02.031923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:09.947 [2024-07-25 13:15:02.031934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:09.947 [2024-07-25 13:15:02.031945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:09.947 [2024-07-25 13:15:02.031957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:09.947 [2024-07-25 13:15:02.031979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.947 [2024-07-25 13:15:02.031991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:09.947 [2024-07-25 13:15:02.032003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:19:09.947 [2024-07-25 13:15:02.032021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.947 [2024-07-25 13:15:02.071015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.947 [2024-07-25 13:15:02.071087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:09.947 [2024-07-25 13:15:02.071130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.906 ms 00:19:09.947 [2024-07-25 13:15:02.071144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.947 [2024-07-25 13:15:02.071360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.947 [2024-07-25 13:15:02.071382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:09.947 [2024-07-25 13:15:02.071408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:09.947 [2024-07-25 13:15:02.071426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.947 [2024-07-25 13:15:02.109681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.948 [2024-07-25 13:15:02.109748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:09.948 [2024-07-25 13:15:02.109770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.205 ms 00:19:09.948 [2024-07-25 13:15:02.109782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.948 [2024-07-25 13:15:02.109952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.948 [2024-07-25 13:15:02.109973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.948 [2024-07-25 13:15:02.109986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:09.948 [2024-07-25 13:15:02.109997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.948 [2024-07-25 13:15:02.110391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.948 [2024-07-25 13:15:02.110412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.948 [2024-07-25 13:15:02.110426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:19:09.948 [2024-07-25 13:15:02.110437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.948 [2024-07-25 
13:15:02.110638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.948 [2024-07-25 13:15:02.110660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.948 [2024-07-25 13:15:02.110674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:09.948 [2024-07-25 13:15:02.110692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.948 [2024-07-25 13:15:02.127003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.948 [2024-07-25 13:15:02.127070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.948 [2024-07-25 13:15:02.127090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.273 ms 00:19:09.948 [2024-07-25 13:15:02.127116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.143566] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:10.207 [2024-07-25 13:15:02.143643] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:10.207 [2024-07-25 13:15:02.143667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.143680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:10.207 [2024-07-25 13:15:02.143695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.349 ms 00:19:10.207 [2024-07-25 13:15:02.143707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.174103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.174207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:10.207 [2024-07-25 13:15:02.174230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.225 ms 00:19:10.207 [2024-07-25 13:15:02.174243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.190653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.190733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:10.207 [2024-07-25 13:15:02.190755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.210 ms 00:19:10.207 [2024-07-25 13:15:02.190766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.206727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.206806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:10.207 [2024-07-25 13:15:02.206827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.802 ms 00:19:10.207 [2024-07-25 13:15:02.206838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.207780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.207820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:10.207 [2024-07-25 13:15:02.207837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:19:10.207 [2024-07-25 13:15:02.207849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.282233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.282323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:10.207 [2024-07-25 13:15:02.282346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.343 ms 00:19:10.207 [2024-07-25 13:15:02.282359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.295387] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:10.207 [2024-07-25 13:15:02.309394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.309467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:10.207 [2024-07-25 13:15:02.309489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:19:10.207 [2024-07-25 13:15:02.309501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.309670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.309691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:10.207 [2024-07-25 13:15:02.309704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:10.207 [2024-07-25 13:15:02.309715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.309782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.309798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:10.207 [2024-07-25 13:15:02.309810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:10.207 [2024-07-25 13:15:02.309821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.309854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.309873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:10.207 [2024-07-25 13:15:02.309885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:10.207 [2024-07-25 13:15:02.309896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.309934] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:10.207 [2024-07-25 13:15:02.309950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.309961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:10.207 [2024-07-25 13:15:02.309973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:10.207 [2024-07-25 13:15:02.309984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.341882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.341983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:10.207 [2024-07-25 13:15:02.342004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.861 ms 00:19:10.207 [2024-07-25 13:15:02.342016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.342247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.207 [2024-07-25 13:15:02.342269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:10.207 [2024-07-25 13:15:02.342283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:10.207 [2024-07-25 13:15:02.342294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.207 [2024-07-25 13:15:02.343303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.207 [2024-07-25 13:15:02.347724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.926 ms, result 0 00:19:10.207 [2024-07-25 13:15:02.348573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:10.207 [2024-07-25 13:15:02.365324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.511  Copying: 28/256 [MB] (28 MBps) Copying: 52/256 [MB] (24 MBps) Copying: 77/256 [MB] (24 MBps) Copying: 101/256 [MB] (24 MBps) Copying: 126/256 [MB] (25 MBps) Copying: 151/256 [MB] (25 MBps) Copying: 177/256 [MB] (25 MBps) Copying: 204/256 [MB] (26 MBps) Copying: 227/256 [MB] (23 MBps) Copying: 252/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-25 13:15:12.516568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.511 [2024-07-25 13:15:12.531824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.531896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.511 [2024-07-25 13:15:12.531920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:20.511 [2024-07-25 13:15:12.531935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.531991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:20.511 [2024-07-25 13:15:12.535999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.536040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.511 [2024-07-25 13:15:12.536059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.979 ms 00:19:20.511 [2024-07-25 13:15:12.536072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.536475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.536505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.511 [2024-07-25 13:15:12.536522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:20.511 [2024-07-25 13:15:12.536535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.541176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.541222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.511 [2024-07-25 13:15:12.541268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:19:20.511 [2024-07-25 13:15:12.541285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.550508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.550564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.511 
[2024-07-25 13:15:12.550583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.180 ms 00:19:20.511 [2024-07-25 13:15:12.550597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.590550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.590629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.511 [2024-07-25 13:15:12.590652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.822 ms 00:19:20.511 [2024-07-25 13:15:12.590665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.614239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.614327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.511 [2024-07-25 13:15:12.614350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.462 ms 00:19:20.511 [2024-07-25 13:15:12.614378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.614657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.614684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.511 [2024-07-25 13:15:12.614700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:19:20.511 [2024-07-25 13:15:12.614713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.511 [2024-07-25 13:15:12.669505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.511 [2024-07-25 13:15:12.669636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:20.511 [2024-07-25 13:15:12.669678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.749 ms 00:19:20.511 [2024-07-25 13:15:12.669704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.770 [2024-07-25 13:15:12.728904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.770 [2024-07-25 13:15:12.729044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:20.770 [2024-07-25 13:15:12.729085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.034 ms 00:19:20.770 [2024-07-25 13:15:12.729134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.770 [2024-07-25 13:15:12.784681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.770 [2024-07-25 13:15:12.784798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.770 [2024-07-25 13:15:12.784835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.393 ms 00:19:20.770 [2024-07-25 13:15:12.784857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.770 [2024-07-25 13:15:12.825757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.770 [2024-07-25 13:15:12.825831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.770 [2024-07-25 13:15:12.825852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.680 ms 00:19:20.770 [2024-07-25 13:15:12.825864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.770 [2024-07-25 13:15:12.825950] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.770 [2024-07-25 13:15:12.825989] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.770 [2024-07-25 13:15:12.826254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 
13:15:12.826300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:19:20.771 [2024-07-25 13:15:12.826592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.826998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.771 [2024-07-25 13:15:12.827196] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.771 [2024-07-25 13:15:12.827207] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 953998eb-5280-4452-a782-072824cd0df1 00:19:20.771 [2024-07-25 13:15:12.827219] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.771 [2024-07-25 13:15:12.827230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.771 [2024-07-25 13:15:12.827256] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.771 [2024-07-25 13:15:12.827268] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.771 [2024-07-25 13:15:12.827279] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.771 [2024-07-25 13:15:12.827290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.771 [2024-07-25 13:15:12.827301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.771 [2024-07-25 13:15:12.827310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.771 [2024-07-25 13:15:12.827320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.771 [2024-07-25 13:15:12.827332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.771 [2024-07-25 13:15:12.827343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.771 [2024-07-25 13:15:12.827360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:19:20.772 [2024-07-25 13:15:12.827370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.772 [2024-07-25 13:15:12.844007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.772 [2024-07-25 13:15:12.844067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.772 [2024-07-25 13:15:12.844087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.604 ms 00:19:20.772 [2024-07-25 13:15:12.844099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.772 [2024-07-25 13:15:12.844597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.772 [2024-07-25 13:15:12.844630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.772 [2024-07-25 13:15:12.844645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:19:20.772 [2024-07-25 13:15:12.844657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.772 [2024-07-25 13:15:12.884560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.772 [2024-07-25 13:15:12.884631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.772 [2024-07-25 13:15:12.884652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.772 [2024-07-25 13:15:12.884663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.772 [2024-07-25 13:15:12.884829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.772 [2024-07-25 13:15:12.884855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.772 [2024-07-25 13:15:12.884867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.772 [2024-07-25 13:15:12.884879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:20.772 [2024-07-25 13:15:12.884948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.772 [2024-07-25 13:15:12.884967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.772 [2024-07-25 13:15:12.884980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.772 [2024-07-25 13:15:12.884991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.772 [2024-07-25 13:15:12.885029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.772 [2024-07-25 13:15:12.885044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.772 [2024-07-25 13:15:12.885062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.772 [2024-07-25 13:15:12.885074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:12.983887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:12.983963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.031 [2024-07-25 13:15:12.983982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 [2024-07-25 13:15:12.983993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.070046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:13.070163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.031 [2024-07-25 13:15:13.070187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 [2024-07-25 13:15:13.070199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.070313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:13.070332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.031 [2024-07-25 13:15:13.070345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 [2024-07-25 13:15:13.070355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.070391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:13.070404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.031 [2024-07-25 13:15:13.070416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 [2024-07-25 13:15:13.070434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.070556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:13.070582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.031 [2024-07-25 13:15:13.070596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 [2024-07-25 13:15:13.070607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.070658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:13.070676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:21.031 [2024-07-25 13:15:13.070688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 
[2024-07-25 13:15:13.070699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.070752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:13.070767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.031 [2024-07-25 13:15:13.070778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 [2024-07-25 13:15:13.070789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.070845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.031 [2024-07-25 13:15:13.070861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.031 [2024-07-25 13:15:13.070873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.031 [2024-07-25 13:15:13.070889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.031 [2024-07-25 13:15:13.071061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.277 ms, result 0 00:19:21.966 00:19:21.966 00:19:21.966 13:15:14 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:21.966 13:15:14 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:22.901 13:15:14 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:22.901 [2024-07-25 13:15:14.857179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:22.901 [2024-07-25 13:15:14.857332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79747 ] 00:19:22.901 [2024-07-25 13:15:15.017620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.159 [2024-07-25 13:15:15.207562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.417 [2024-07-25 13:15:15.515815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.417 [2024-07-25 13:15:15.515898] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.677 [2024-07-25 13:15:15.678851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.677 [2024-07-25 13:15:15.678931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:23.677 [2024-07-25 13:15:15.678953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:23.677 [2024-07-25 13:15:15.678965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.677 [2024-07-25 13:15:15.682400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.677 [2024-07-25 13:15:15.682453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.678 [2024-07-25 13:15:15.682472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.402 ms 00:19:23.678 [2024-07-25 13:15:15.682483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.682694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:23.678 [2024-07-25 13:15:15.683748] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:23.678 [2024-07-25 13:15:15.683789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.683804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.678 [2024-07-25 13:15:15.683816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:19:23.678 [2024-07-25 13:15:15.683827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.685203] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:23.678 [2024-07-25 13:15:15.702587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.702670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:23.678 [2024-07-25 13:15:15.702703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.379 ms 00:19:23.678 [2024-07-25 13:15:15.702716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.702935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.702958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:23.678 [2024-07-25 13:15:15.702972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:23.678 [2024-07-25 13:15:15.702983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.707802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:23.678 [2024-07-25 13:15:15.707865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.678 [2024-07-25 13:15:15.707883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.751 ms 00:19:23.678 [2024-07-25 13:15:15.707895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.708075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.708098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.678 [2024-07-25 13:15:15.708148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:23.678 [2024-07-25 13:15:15.708161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.708209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.708225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:23.678 [2024-07-25 13:15:15.708241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:23.678 [2024-07-25 13:15:15.708252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.708286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:23.678 [2024-07-25 13:15:15.712726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.712788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.678 [2024-07-25 13:15:15.712805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.449 ms 00:19:23.678 [2024-07-25 13:15:15.712817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.712944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.712964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:23.678 [2024-07-25 13:15:15.712977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:23.678 [2024-07-25 13:15:15.712987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.713038] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:23.678 [2024-07-25 13:15:15.713071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:23.678 [2024-07-25 13:15:15.713142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:23.678 [2024-07-25 13:15:15.713168] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:23.678 [2024-07-25 13:15:15.713275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:23.678 [2024-07-25 13:15:15.713290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:23.678 [2024-07-25 13:15:15.713312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:23.678 [2024-07-25 13:15:15.713328] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:23.678 [2024-07-25 13:15:15.713342] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:23.678 [2024-07-25 13:15:15.713358] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:23.678 [2024-07-25 13:15:15.713370] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:23.678 [2024-07-25 13:15:15.713380] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:23.678 [2024-07-25 13:15:15.713391] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:23.678 [2024-07-25 13:15:15.713402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.713414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:23.678 [2024-07-25 13:15:15.713425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:19:23.678 [2024-07-25 13:15:15.713436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.713535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.678 [2024-07-25 13:15:15.713551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:23.678 [2024-07-25 13:15:15.713568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:23.678 [2024-07-25 13:15:15.713579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.678 [2024-07-25 13:15:15.713690] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:23.678 [2024-07-25 13:15:15.713707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:23.678 [2024-07-25 13:15:15.713719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.678 [2024-07-25 13:15:15.713731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.678 [2024-07-25 13:15:15.713744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:23.678 [2024-07-25 13:15:15.713754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:23.678 [2024-07-25 13:15:15.713765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:23.678 [2024-07-25 13:15:15.713775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:23.678 [2024-07-25 13:15:15.713786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:23.678 [2024-07-25 13:15:15.713796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.678 [2024-07-25 13:15:15.713806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:23.678 [2024-07-25 13:15:15.713816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:23.678 [2024-07-25 13:15:15.713826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.678 [2024-07-25 13:15:15.713837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:23.678 [2024-07-25 13:15:15.713853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:23.678 [2024-07-25 13:15:15.713871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.678 [2024-07-25 13:15:15.713889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:23.678 [2024-07-25 13:15:15.713907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:23.678 [2024-07-25 13:15:15.713945] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.678 [2024-07-25 13:15:15.713967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:23.678 [2024-07-25 13:15:15.713986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:23.678 [2024-07-25 13:15:15.714006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.678 [2024-07-25 13:15:15.714025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:23.678 [2024-07-25 13:15:15.714042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:23.678 [2024-07-25 13:15:15.714061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.678 [2024-07-25 13:15:15.714080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:23.678 [2024-07-25 13:15:15.714099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:23.678 [2024-07-25 13:15:15.714137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.678 [2024-07-25 13:15:15.714156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:23.678 [2024-07-25 13:15:15.714175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:23.678 [2024-07-25 13:15:15.714194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.678 [2024-07-25 13:15:15.714210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:23.678 [2024-07-25 13:15:15.714221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:23.678 [2024-07-25 13:15:15.714231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.678 [2024-07-25 13:15:15.714241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:23.678 [2024-07-25 13:15:15.714251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:23.678 [2024-07-25 13:15:15.714263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.678 [2024-07-25 13:15:15.714273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:23.678 [2024-07-25 13:15:15.714283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:23.678 [2024-07-25 13:15:15.714293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.678 [2024-07-25 13:15:15.714303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:23.678 [2024-07-25 13:15:15.714313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:23.678 [2024-07-25 13:15:15.714323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.678 [2024-07-25 13:15:15.714333] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:23.679 [2024-07-25 13:15:15.714344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:23.679 [2024-07-25 13:15:15.714355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.679 [2024-07-25 13:15:15.714366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.679 [2024-07-25 13:15:15.714384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:23.679 [2024-07-25 13:15:15.714394] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:23.679 [2024-07-25 13:15:15.714404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:23.679 
[2024-07-25 13:15:15.714414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:23.679 [2024-07-25 13:15:15.714424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:23.679 [2024-07-25 13:15:15.714434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:23.679 [2024-07-25 13:15:15.714446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:23.679 [2024-07-25 13:15:15.714460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.679 [2024-07-25 13:15:15.714472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:23.679 [2024-07-25 13:15:15.714484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:23.679 [2024-07-25 13:15:15.714495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:23.679 [2024-07-25 13:15:15.714506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:23.679 [2024-07-25 13:15:15.714517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:23.679 [2024-07-25 13:15:15.714528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:23.679 [2024-07-25 13:15:15.714539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:23.679 [2024-07-25 13:15:15.714554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:23.679 [2024-07-25 13:15:15.714565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:23.679 [2024-07-25 13:15:15.714576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:23.679 [2024-07-25 13:15:15.714587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:23.679 [2024-07-25 13:15:15.714597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:23.679 [2024-07-25 13:15:15.714609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:23.679 [2024-07-25 13:15:15.714620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:23.679 [2024-07-25 13:15:15.714631] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:23.679 [2024-07-25 13:15:15.714643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.679 [2024-07-25 13:15:15.714655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:23.679 [2024-07-25 13:15:15.714666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:23.679 [2024-07-25 13:15:15.714677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:23.679 [2024-07-25 13:15:15.714688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:23.679 [2024-07-25 13:15:15.714701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.714713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:23.679 [2024-07-25 13:15:15.714724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:19:23.679 [2024-07-25 13:15:15.714735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.754990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.755055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.679 [2024-07-25 13:15:15.755082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.172 ms 00:19:23.679 [2024-07-25 13:15:15.755094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.755342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.755374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:23.679 [2024-07-25 13:15:15.755409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:23.679 [2024-07-25 13:15:15.755430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.794684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.794752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.679 [2024-07-25 13:15:15.794772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.204 ms 00:19:23.679 [2024-07-25 13:15:15.794784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.794959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.794979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:23.679 [2024-07-25 13:15:15.794993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:23.679 [2024-07-25 13:15:15.795004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.795347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.795371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:23.679 [2024-07-25 13:15:15.795386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:23.679 [2024-07-25 13:15:15.795397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.795560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.795581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:23.679 [2024-07-25 13:15:15.795593] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:23.679 [2024-07-25 13:15:15.795605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.813161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.813224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.679 [2024-07-25 13:15:15.813244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.523 ms 00:19:23.679 [2024-07-25 13:15:15.813256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.830070] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:23.679 [2024-07-25 13:15:15.830164] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:23.679 [2024-07-25 13:15:15.830188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.830201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:23.679 [2024-07-25 13:15:15.830217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.683 ms 00:19:23.679 [2024-07-25 13:15:15.830227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.679 [2024-07-25 13:15:15.863067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.679 [2024-07-25 13:15:15.863162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:23.679 [2024-07-25 13:15:15.863182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.652 ms 00:19:23.679 [2024-07-25 13:15:15.863195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.937 [2024-07-25 13:15:15.879867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.937 [2024-07-25 13:15:15.879941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:23.937 [2024-07-25 13:15:15.879960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.490 ms 00:19:23.937 [2024-07-25 13:15:15.879973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.937 [2024-07-25 13:15:15.896031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.937 [2024-07-25 13:15:15.896099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:23.937 [2024-07-25 13:15:15.896129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.882 ms 00:19:23.937 [2024-07-25 13:15:15.896141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.937 [2024-07-25 13:15:15.897076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.937 [2024-07-25 13:15:15.897128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:23.937 [2024-07-25 13:15:15.897146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:19:23.937 [2024-07-25 13:15:15.897158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.937 [2024-07-25 13:15:15.984638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:15.984712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:23.938 [2024-07-25 13:15:15.984733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.444 ms 00:19:23.938 [2024-07-25 13:15:15.984745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.938 [2024-07-25 13:15:15.998440] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:23.938 [2024-07-25 13:15:16.012713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:16.012773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.938 [2024-07-25 13:15:16.012793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.788 ms 00:19:23.938 [2024-07-25 13:15:16.012805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.938 [2024-07-25 13:15:16.012989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:16.013028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:23.938 [2024-07-25 13:15:16.013043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:23.938 [2024-07-25 13:15:16.013055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.938 [2024-07-25 13:15:16.013141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:16.013161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:23.938 [2024-07-25 13:15:16.013174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:23.938 [2024-07-25 13:15:16.013185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.938 [2024-07-25 13:15:16.013221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:16.013242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:23.938 [2024-07-25 13:15:16.013254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:23.938 [2024-07-25 13:15:16.013265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.938 [2024-07-25 13:15:16.013304] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:23.938 [2024-07-25 13:15:16.013321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:16.013332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:23.938 [2024-07-25 13:15:16.013344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:23.938 [2024-07-25 13:15:16.013355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.938 [2024-07-25 13:15:16.045736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:16.045817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:23.938 [2024-07-25 13:15:16.045853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.346 ms 00:19:23.938 [2024-07-25 13:15:16.045865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.938 [2024-07-25 13:15:16.046037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.938 [2024-07-25 13:15:16.046058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:23.938 [2024-07-25 13:15:16.046072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:23.938 [2024-07-25 13:15:16.046084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:23.938 [2024-07-25 13:15:16.047083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:23.938 [2024-07-25 13:15:16.051369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.910 ms, result 0 00:19:23.938 [2024-07-25 13:15:16.052365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:23.938 [2024-07-25 13:15:16.069170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.196  Copying: 4096/4096 [kB] (average 23 MBps)[2024-07-25 13:15:16.241999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.196 [2024-07-25 13:15:16.254594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.254647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:24.197 [2024-07-25 13:15:16.254667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.197 [2024-07-25 13:15:16.254678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.254719] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:24.197 [2024-07-25 13:15:16.258081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.258123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:24.197 [2024-07-25 13:15:16.258139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:19:24.197 [2024-07-25 13:15:16.258151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.260263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.260300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:24.197 [2024-07-25 13:15:16.260316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:19:24.197 [2024-07-25 13:15:16.260327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.264332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.264367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:24.197 [2024-07-25 13:15:16.264390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.980 ms 00:19:24.197 [2024-07-25 13:15:16.264401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.272091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.272146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:24.197 [2024-07-25 13:15:16.272161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.648 ms 00:19:24.197 [2024-07-25 13:15:16.272173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.304292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.304362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:24.197 [2024-07-25 13:15:16.304381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
32.036 ms 00:19:24.197 [2024-07-25 13:15:16.304393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.322391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.322446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:24.197 [2024-07-25 13:15:16.322465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.908 ms 00:19:24.197 [2024-07-25 13:15:16.322491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.322736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.322758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:24.197 [2024-07-25 13:15:16.322772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:24.197 [2024-07-25 13:15:16.322783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.197 [2024-07-25 13:15:16.355148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.197 [2024-07-25 13:15:16.355250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:24.197 [2024-07-25 13:15:16.355269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.337 ms 00:19:24.197 [2024-07-25 13:15:16.355280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.456 [2024-07-25 13:15:16.386965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.456 [2024-07-25 13:15:16.387053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:24.456 [2024-07-25 13:15:16.387072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.553 ms 00:19:24.456 [2024-07-25 13:15:16.387082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.456 [2024-07-25 13:15:16.419188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.456 [2024-07-25 13:15:16.419247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:24.456 [2024-07-25 13:15:16.419267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.004 ms 00:19:24.456 [2024-07-25 13:15:16.419278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.456 [2024-07-25 13:15:16.450945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.456 [2024-07-25 13:15:16.451017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:24.456 [2024-07-25 13:15:16.451036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.538 ms 00:19:24.456 [2024-07-25 13:15:16.451046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.456 [2024-07-25 13:15:16.451136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:24.456 [2024-07-25 13:15:16.451165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:24.456 [2024-07-25 13:15:16.451180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:24.456 [2024-07-25 13:15:16.451192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:24.456 [2024-07-25 13:15:16.451203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:24.456 [2024-07-25 
13:15:16.451214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:24.456 [2024-07-25 13:15:16.451225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:24.457 [2024-07-25 13:15:16.451519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.451994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:24.457 [2024-07-25 13:15:16.452215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:24.458 [2024-07-25 13:15:16.452368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:24.458 [2024-07-25 13:15:16.452380] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 953998eb-5280-4452-a782-072824cd0df1 00:19:24.458 [2024-07-25 13:15:16.452392] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:24.458 [2024-07-25 13:15:16.452402] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:24.458 
[2024-07-25 13:15:16.452431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:24.458 [2024-07-25 13:15:16.452443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:24.458 [2024-07-25 13:15:16.452453] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:24.458 [2024-07-25 13:15:16.452465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:24.458 [2024-07-25 13:15:16.452476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:24.458 [2024-07-25 13:15:16.452485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:24.458 [2024-07-25 13:15:16.452495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:24.458 [2024-07-25 13:15:16.452506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.458 [2024-07-25 13:15:16.452517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:24.458 [2024-07-25 13:15:16.452534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:19:24.458 [2024-07-25 13:15:16.452545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.458 [2024-07-25 13:15:16.469023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.458 [2024-07-25 13:15:16.469071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:24.458 [2024-07-25 13:15:16.469088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.450 ms 00:19:24.458 [2024-07-25 13:15:16.469100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.458 [2024-07-25 13:15:16.469604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.458 [2024-07-25 13:15:16.469632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:24.458 [2024-07-25 13:15:16.469646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:19:24.458 [2024-07-25 13:15:16.469657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.458 [2024-07-25 13:15:16.509718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.458 [2024-07-25 13:15:16.509779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.458 [2024-07-25 13:15:16.509796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.458 [2024-07-25 13:15:16.509808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.458 [2024-07-25 13:15:16.509926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.458 [2024-07-25 13:15:16.509943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.458 [2024-07-25 13:15:16.509955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.458 [2024-07-25 13:15:16.509966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.458 [2024-07-25 13:15:16.510040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.458 [2024-07-25 13:15:16.510059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.458 [2024-07-25 13:15:16.510072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.458 [2024-07-25 13:15:16.510083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.458 [2024-07-25 13:15:16.510124] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:24.458 [2024-07-25 13:15:16.510147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.458 [2024-07-25 13:15:16.510159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.458 [2024-07-25 13:15:16.510170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.458 [2024-07-25 13:15:16.608140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.458 [2024-07-25 13:15:16.608211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.458 [2024-07-25 13:15:16.608229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.458 [2024-07-25 13:15:16.608241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.716 [2024-07-25 13:15:16.695333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.717 [2024-07-25 13:15:16.695406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.717 [2024-07-25 13:15:16.695426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.717 [2024-07-25 13:15:16.695437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.717 [2024-07-25 13:15:16.695525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.717 [2024-07-25 13:15:16.695543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.717 [2024-07-25 13:15:16.695555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.717 [2024-07-25 13:15:16.695566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.717 [2024-07-25 13:15:16.695601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.717 [2024-07-25 13:15:16.695615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.717 [2024-07-25 13:15:16.695626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.717 [2024-07-25 13:15:16.695642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.717 [2024-07-25 13:15:16.695766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.717 [2024-07-25 13:15:16.695787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.717 [2024-07-25 13:15:16.695799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.717 [2024-07-25 13:15:16.695815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.717 [2024-07-25 13:15:16.695870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.717 [2024-07-25 13:15:16.695888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:24.717 [2024-07-25 13:15:16.695900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.717 [2024-07-25 13:15:16.695917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.717 [2024-07-25 13:15:16.695965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.717 [2024-07-25 13:15:16.695980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.717 [2024-07-25 13:15:16.695991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.717 [2024-07-25 13:15:16.696002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:24.717 [2024-07-25 13:15:16.696056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.717 [2024-07-25 13:15:16.696072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.717 [2024-07-25 13:15:16.696084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.717 [2024-07-25 13:15:16.696100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.717 [2024-07-25 13:15:16.696320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 441.761 ms, result 0 00:19:25.652 00:19:25.652 00:19:25.652 13:15:17 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79783 00:19:25.652 13:15:17 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79783 00:19:25.652 13:15:17 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:25.652 13:15:17 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79783 ']' 00:19:25.652 13:15:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.652 13:15:17 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.652 13:15:17 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.652 13:15:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.652 13:15:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:25.909 [2024-07-25 13:15:17.849510] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:25.909 [2024-07-25 13:15:17.849665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79783 ] 00:19:25.909 [2024-07-25 13:15:18.014140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.184 [2024-07-25 13:15:18.200458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.749 13:15:18 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.749 13:15:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:26.749 13:15:18 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:27.315 [2024-07-25 13:15:19.216960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.315 [2024-07-25 13:15:19.217065] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.315 [2024-07-25 13:15:19.394698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.394776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.315 [2024-07-25 13:15:19.394799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.315 [2024-07-25 13:15:19.394813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.398055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.398119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.315 [2024-07-25 13:15:19.398140] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:19:27.315 [2024-07-25 13:15:19.398155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.398391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.315 [2024-07-25 13:15:19.399357] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.315 [2024-07-25 13:15:19.399405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.399423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.315 [2024-07-25 13:15:19.399437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:19:27.315 [2024-07-25 13:15:19.399453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.400733] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:27.315 [2024-07-25 13:15:19.417161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.417247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:27.315 [2024-07-25 13:15:19.417272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.417 ms 00:19:27.315 [2024-07-25 13:15:19.417285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.417480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.417505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:27.315 [2024-07-25 13:15:19.417522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:27.315 [2024-07-25 13:15:19.417534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.422136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.422203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.315 [2024-07-25 13:15:19.422232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.523 ms 00:19:27.315 [2024-07-25 13:15:19.422252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.422503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.422529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.315 [2024-07-25 13:15:19.422546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:27.315 [2024-07-25 13:15:19.422563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.422614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.422630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.315 [2024-07-25 13:15:19.422644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:27.315 [2024-07-25 13:15:19.422656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.422696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:27.315 [2024-07-25 13:15:19.426981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:27.315 [2024-07-25 13:15:19.427028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.315 [2024-07-25 13:15:19.427046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms 00:19:27.315 [2024-07-25 13:15:19.427060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.427161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.427189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.315 [2024-07-25 13:15:19.427205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:27.315 [2024-07-25 13:15:19.427218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.427250] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:27.315 [2024-07-25 13:15:19.427280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:27.315 [2024-07-25 13:15:19.427332] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:27.315 [2024-07-25 13:15:19.427359] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:27.315 [2024-07-25 13:15:19.427467] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:27.315 [2024-07-25 13:15:19.427492] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.315 [2024-07-25 13:15:19.427508] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:27.315 [2024-07-25 13:15:19.427525] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:27.315 [2024-07-25 13:15:19.427539] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.315 [2024-07-25 13:15:19.427553] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:27.315 [2024-07-25 13:15:19.427565] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.315 [2024-07-25 13:15:19.427578] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:27.315 [2024-07-25 13:15:19.427591] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:27.315 [2024-07-25 13:15:19.427607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.427619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.315 [2024-07-25 13:15:19.427633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:19:27.315 [2024-07-25 13:15:19.427647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.427775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.315 [2024-07-25 13:15:19.427792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.315 [2024-07-25 13:15:19.427807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:27.315 [2024-07-25 13:15:19.427818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.315 [2024-07-25 13:15:19.427942] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.315 [2024-07-25 13:15:19.427961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.315 [2024-07-25 13:15:19.427994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.315 [2024-07-25 13:15:19.428007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.315 [2024-07-25 13:15:19.428025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.315 [2024-07-25 13:15:19.428037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.315 [2024-07-25 13:15:19.428049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:27.315 [2024-07-25 13:15:19.428061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.315 [2024-07-25 13:15:19.428075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:27.315 [2024-07-25 13:15:19.428086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.315 [2024-07-25 13:15:19.428099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.315 [2024-07-25 13:15:19.428128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:27.315 [2024-07-25 13:15:19.428143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.316 [2024-07-25 13:15:19.428154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.316 [2024-07-25 13:15:19.428166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:27.316 [2024-07-25 13:15:19.428177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.316 [2024-07-25 13:15:19.428200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:27.316 [2024-07-25 13:15:19.428212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.316 [2024-07-25 13:15:19.428235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.316 [2024-07-25 13:15:19.428261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.316 [2024-07-25 13:15:19.428272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.316 [2024-07-25 13:15:19.428296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.316 [2024-07-25 13:15:19.428308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.316 [2024-07-25 13:15:19.428346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:27.316 [2024-07-25 13:15:19.428357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.316 [2024-07-25 13:15:19.428381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.316 [2024-07-25 
13:15:19.428393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.316 [2024-07-25 13:15:19.428415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.316 [2024-07-25 13:15:19.428426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:27.316 [2024-07-25 13:15:19.428438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.316 [2024-07-25 13:15:19.428448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:27.316 [2024-07-25 13:15:19.428461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:27.316 [2024-07-25 13:15:19.428471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:27.316 [2024-07-25 13:15:19.428497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:27.316 [2024-07-25 13:15:19.428509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428519] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.316 [2024-07-25 13:15:19.428532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.316 [2024-07-25 13:15:19.428544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.316 [2024-07-25 13:15:19.428557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.316 [2024-07-25 13:15:19.428568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.316 [2024-07-25 13:15:19.428581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.316 [2024-07-25 13:15:19.428591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.316 [2024-07-25 13:15:19.428604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.316 [2024-07-25 13:15:19.428614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.316 [2024-07-25 13:15:19.428627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:27.316 [2024-07-25 13:15:19.428640] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.316 [2024-07-25 13:15:19.428658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.316 [2024-07-25 13:15:19.428671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:27.316 [2024-07-25 13:15:19.428688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:27.316 [2024-07-25 13:15:19.428700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:27.316 [2024-07-25 13:15:19.428713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:27.316 [2024-07-25 13:15:19.428725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:27.316 
[2024-07-25 13:15:19.428738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:27.316 [2024-07-25 13:15:19.428750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:27.316 [2024-07-25 13:15:19.428763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:27.316 [2024-07-25 13:15:19.428774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:27.316 [2024-07-25 13:15:19.428787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:27.316 [2024-07-25 13:15:19.428799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:27.316 [2024-07-25 13:15:19.428812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:27.316 [2024-07-25 13:15:19.428824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:27.316 [2024-07-25 13:15:19.428837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:27.316 [2024-07-25 13:15:19.428848] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.316 [2024-07-25 13:15:19.428864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.316 [2024-07-25 13:15:19.428877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.316 [2024-07-25 13:15:19.428892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.316 [2024-07-25 13:15:19.428904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.316 [2024-07-25 13:15:19.428917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.316 [2024-07-25 13:15:19.428930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.316 [2024-07-25 13:15:19.428944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.316 [2024-07-25 13:15:19.428957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:19:27.316 [2024-07-25 13:15:19.428973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.316 [2024-07-25 13:15:19.461789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.316 [2024-07-25 13:15:19.461863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.316 [2024-07-25 13:15:19.461888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.724 ms 00:19:27.316 [2024-07-25 13:15:19.461903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.316 [2024-07-25 13:15:19.462096] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.316 [2024-07-25 13:15:19.462137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.316 [2024-07-25 13:15:19.462152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:27.316 [2024-07-25 13:15:19.462166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.316 [2024-07-25 13:15:19.500646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.316 [2024-07-25 13:15:19.500716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.316 [2024-07-25 13:15:19.500737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.448 ms 00:19:27.316 [2024-07-25 13:15:19.500751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.316 [2024-07-25 13:15:19.500905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.316 [2024-07-25 13:15:19.500928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.316 [2024-07-25 13:15:19.500943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.316 [2024-07-25 13:15:19.500956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.316 [2024-07-25 13:15:19.501296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.316 [2024-07-25 13:15:19.501335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.316 [2024-07-25 13:15:19.501350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:27.316 [2024-07-25 13:15:19.501363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.316 [2024-07-25 13:15:19.501516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.316 [2024-07-25 13:15:19.501538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.316 [2024-07-25 13:15:19.501551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:19:27.316 [2024-07-25 13:15:19.501564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.518996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.519067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.576 [2024-07-25 13:15:19.519088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.401 ms 00:19:27.576 [2024-07-25 13:15:19.519114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.535544] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:27.576 [2024-07-25 13:15:19.535621] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:27.576 [2024-07-25 13:15:19.535647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.535662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:27.576 [2024-07-25 13:15:19.535679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.346 ms 00:19:27.576 [2024-07-25 13:15:19.535692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.566131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 
13:15:19.566248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:27.576 [2024-07-25 13:15:19.566273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.260 ms 00:19:27.576 [2024-07-25 13:15:19.566291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.582685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.582773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:27.576 [2024-07-25 13:15:19.582808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.215 ms 00:19:27.576 [2024-07-25 13:15:19.582827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.598789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.598858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:27.576 [2024-07-25 13:15:19.598878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.796 ms 00:19:27.576 [2024-07-25 13:15:19.598892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.599770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.599809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.576 [2024-07-25 13:15:19.599825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:19:27.576 [2024-07-25 13:15:19.599840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.681474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.681593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:27.576 [2024-07-25 13:15:19.681616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.599 ms 00:19:27.576 [2024-07-25 13:15:19.681631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.694575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:27.576 [2024-07-25 13:15:19.708724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.708796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.576 [2024-07-25 13:15:19.708822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.916 ms 00:19:27.576 [2024-07-25 13:15:19.708836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.708981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.709014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:27.576 [2024-07-25 13:15:19.709031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:27.576 [2024-07-25 13:15:19.709043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.709137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.709160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.576 [2024-07-25 13:15:19.709174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:27.576 
[2024-07-25 13:15:19.709187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.709228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.709243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.576 [2024-07-25 13:15:19.709258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:27.576 [2024-07-25 13:15:19.709269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.709312] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:27.576 [2024-07-25 13:15:19.709329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.709345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:27.576 [2024-07-25 13:15:19.709359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:27.576 [2024-07-25 13:15:19.709372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.741263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.741325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.576 [2024-07-25 13:15:19.741346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.858 ms 00:19:27.576 [2024-07-25 13:15:19.741360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.741525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.576 [2024-07-25 13:15:19.741555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.576 [2024-07-25 13:15:19.741570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:27.576 [2024-07-25 13:15:19.741583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.576 [2024-07-25 13:15:19.742566] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.576 [2024-07-25 13:15:19.746844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.516 ms, result 0 00:19:27.576 [2024-07-25 13:15:19.747877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.834 Some configs were skipped because the RPC state that can call them passed over. 
00:19:27.834 13:15:19 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:27.834 [2024-07-25 13:15:20.018064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.834 [2024-07-25 13:15:20.018346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:27.835 [2024-07-25 13:15:20.018509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:19:27.835 [2024-07-25 13:15:20.018571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.835 [2024-07-25 13:15:20.018748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.180 ms, result 0 00:19:27.835 true 00:19:28.093 13:15:20 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:28.351 [2024-07-25 13:15:20.290033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.351 [2024-07-25 13:15:20.290337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:28.351 [2024-07-25 13:15:20.290497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:19:28.351 [2024-07-25 13:15:20.290558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.351 [2024-07-25 13:15:20.290789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.818 ms, result 0 00:19:28.351 true 00:19:28.351 13:15:20 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79783 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79783 ']' 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79783 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79783 00:19:28.351 killing process with pid 79783 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79783' 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79783 00:19:28.351 13:15:20 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79783 00:19:29.287 [2024-07-25 13:15:21.282574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.287 [2024-07-25 13:15:21.282651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:29.287 [2024-07-25 13:15:21.282679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:29.287 [2024-07-25 13:15:21.282692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.287 [2024-07-25 13:15:21.282728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:29.287 [2024-07-25 13:15:21.286049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.287 [2024-07-25 13:15:21.286092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:29.287 [2024-07-25 13:15:21.286120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.298 ms 00:19:29.287 [2024-07-25 13:15:21.286138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.287 [2024-07-25 13:15:21.286465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.287 [2024-07-25 13:15:21.286489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:29.287 [2024-07-25 13:15:21.286503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:29.287 [2024-07-25 13:15:21.286517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.287 [2024-07-25 13:15:21.290610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.287 [2024-07-25 13:15:21.290663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:29.287 [2024-07-25 13:15:21.290682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.070 ms 00:19:29.287 [2024-07-25 13:15:21.290696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.287 [2024-07-25 13:15:21.298575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.287 [2024-07-25 13:15:21.298619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:29.287 [2024-07-25 13:15:21.298636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.811 ms 00:19:29.287 [2024-07-25 13:15:21.298653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.287 [2024-07-25 13:15:21.311289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.287 [2024-07-25 13:15:21.311351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:29.287 [2024-07-25 13:15:21.311372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.564 ms 00:19:29.287 [2024-07-25 13:15:21.311389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.287 [2024-07-25 13:15:21.319623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.287 [2024-07-25 13:15:21.319682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:29.288 [2024-07-25 13:15:21.319701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.184 ms 00:19:29.288 [2024-07-25 13:15:21.319716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.288 [2024-07-25 13:15:21.319881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.288 [2024-07-25 13:15:21.319906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:29.288 [2024-07-25 13:15:21.319921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:29.288 [2024-07-25 13:15:21.319951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.288 [2024-07-25 13:15:21.333143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.288 [2024-07-25 13:15:21.333198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:29.288 [2024-07-25 13:15:21.333217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.164 ms 00:19:29.288 [2024-07-25 13:15:21.333231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.288 [2024-07-25 13:15:21.346053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.288 [2024-07-25 13:15:21.346137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:29.288 [2024-07-25 
13:15:21.346158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.771 ms 00:19:29.288 [2024-07-25 13:15:21.346180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.288 [2024-07-25 13:15:21.358647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.288 [2024-07-25 13:15:21.358707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:29.288 [2024-07-25 13:15:21.358726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.413 ms 00:19:29.288 [2024-07-25 13:15:21.358739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.288 [2024-07-25 13:15:21.371216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.288 [2024-07-25 13:15:21.371262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:29.288 [2024-07-25 13:15:21.371281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.401 ms 00:19:29.288 [2024-07-25 13:15:21.371300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.288 [2024-07-25 13:15:21.371346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:29.288 [2024-07-25 13:15:21.371378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371598] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 
13:15:21.371956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.371996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:29.288 [2024-07-25 13:15:21.372306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:29.288 [2024-07-25 13:15:21.372347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:29.289 [2024-07-25 13:15:21.372762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:29.289 [2024-07-25 13:15:21.372775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 953998eb-5280-4452-a782-072824cd0df1 00:19:29.289 [2024-07-25 13:15:21.372791] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:29.289 [2024-07-25 13:15:21.372803] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:29.289 [2024-07-25 13:15:21.372816] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:29.289 [2024-07-25 13:15:21.372828] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:29.289 [2024-07-25 13:15:21.372840] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:29.289 [2024-07-25 13:15:21.372852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:29.289 [2024-07-25 13:15:21.372865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:29.289 [2024-07-25 13:15:21.372876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:29.289 [2024-07-25 13:15:21.372903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:29.289 [2024-07-25 13:15:21.372916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.289 [2024-07-25 13:15:21.372929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:29.289 [2024-07-25 13:15:21.372944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:19:29.289 [2024-07-25 13:15:21.372956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.289 [2024-07-25 13:15:21.389568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.289 [2024-07-25 13:15:21.389626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:29.289 [2024-07-25 13:15:21.389646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.559 ms 00:19:29.289 [2024-07-25 13:15:21.389663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.289 [2024-07-25 13:15:21.390195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:29.289 [2024-07-25 13:15:21.390235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:29.289 [2024-07-25 13:15:21.390252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:19:29.289 [2024-07-25 13:15:21.390267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.289 [2024-07-25 13:15:21.445280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.289 [2024-07-25 13:15:21.445381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.289 [2024-07-25 13:15:21.445401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.289 [2024-07-25 13:15:21.445415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.289 [2024-07-25 13:15:21.445552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.289 [2024-07-25 13:15:21.445577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.289 [2024-07-25 13:15:21.445589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.289 [2024-07-25 13:15:21.445602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.289 [2024-07-25 13:15:21.445666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.289 [2024-07-25 13:15:21.445689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.289 [2024-07-25 13:15:21.445702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.289 [2024-07-25 13:15:21.445717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.289 [2024-07-25 13:15:21.445742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.289 [2024-07-25 13:15:21.445757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.289 [2024-07-25 13:15:21.445772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.289 [2024-07-25 13:15:21.445785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.545706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.545795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.560 [2024-07-25 13:15:21.545815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.545828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.629833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.629928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.560 [2024-07-25 13:15:21.629947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.629960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.630064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.630085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.560 [2024-07-25 13:15:21.630098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.630113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:29.560 [2024-07-25 13:15:21.630176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.630209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.560 [2024-07-25 13:15:21.630238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.630255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.630382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.630406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.560 [2024-07-25 13:15:21.630419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.630433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.630485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.630508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.560 [2024-07-25 13:15:21.630521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.630534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.630585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.630603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.560 [2024-07-25 13:15:21.630615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.630632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.630686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.560 [2024-07-25 13:15:21.630706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.560 [2024-07-25 13:15:21.630718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.560 [2024-07-25 13:15:21.630735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.560 [2024-07-25 13:15:21.630893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.307 ms, result 0 00:19:30.508 13:15:22 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.508 [2024-07-25 13:15:22.653437] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:30.508 [2024-07-25 13:15:22.653593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79847 ] 00:19:30.766 [2024-07-25 13:15:22.821353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.022 [2024-07-25 13:15:23.050613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.280 [2024-07-25 13:15:23.368673] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.280 [2024-07-25 13:15:23.368778] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.537 [2024-07-25 13:15:23.530967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-07-25 13:15:23.531037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.537 [2024-07-25 13:15:23.531059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.537 [2024-07-25 13:15:23.531071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.537 [2024-07-25 13:15:23.534306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-07-25 13:15:23.534353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.537 [2024-07-25 13:15:23.534372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:19:31.537 [2024-07-25 13:15:23.534384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.537 [2024-07-25 13:15:23.534517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.537 [2024-07-25 13:15:23.535491] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.537 [2024-07-25 13:15:23.535536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-07-25 13:15:23.535552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.537 [2024-07-25 13:15:23.535565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:19:31.537 [2024-07-25 13:15:23.535577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.537 [2024-07-25 13:15:23.536825] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:31.537 [2024-07-25 13:15:23.553288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-07-25 13:15:23.553340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:31.537 [2024-07-25 13:15:23.553367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.463 ms 00:19:31.537 [2024-07-25 13:15:23.553380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.537 [2024-07-25 13:15:23.553523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-07-25 13:15:23.553546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:31.537 [2024-07-25 13:15:23.553559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:31.537 [2024-07-25 13:15:23.553571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.537 [2024-07-25 13:15:23.557999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:31.537 [2024-07-25 13:15:23.558052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.537 [2024-07-25 13:15:23.558070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.367 ms 00:19:31.537 [2024-07-25 13:15:23.558081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.537 [2024-07-25 13:15:23.558244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-07-25 13:15:23.558273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.537 [2024-07-25 13:15:23.558286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:31.537 [2024-07-25 13:15:23.558297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.537 [2024-07-25 13:15:23.558343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-07-25 13:15:23.558359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.537 [2024-07-25 13:15:23.558376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:31.537 [2024-07-25 13:15:23.558387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.558421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:31.538 [2024-07-25 13:15:23.562730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.562772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.538 [2024-07-25 13:15:23.562789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.319 ms 00:19:31.538 [2024-07-25 13:15:23.562800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.562883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.562902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.538 [2024-07-25 13:15:23.562914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:31.538 [2024-07-25 13:15:23.562926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.562959] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:31.538 [2024-07-25 13:15:23.562988] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:31.538 [2024-07-25 13:15:23.563035] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:31.538 [2024-07-25 13:15:23.563056] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:31.538 [2024-07-25 13:15:23.563183] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.538 [2024-07-25 13:15:23.563203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.538 [2024-07-25 13:15:23.563219] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:31.538 [2024-07-25 13:15:23.563234] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563248] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563264] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:31.538 [2024-07-25 13:15:23.563275] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.538 [2024-07-25 13:15:23.563287] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.538 [2024-07-25 13:15:23.563297] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.538 [2024-07-25 13:15:23.563309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.563321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.538 [2024-07-25 13:15:23.563333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:19:31.538 [2024-07-25 13:15:23.563344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.563449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.563465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.538 [2024-07-25 13:15:23.563481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:31.538 [2024-07-25 13:15:23.563492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.563646] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.538 [2024-07-25 13:15:23.563672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.538 [2024-07-25 13:15:23.563687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.538 [2024-07-25 13:15:23.563721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.538 [2024-07-25 13:15:23.563753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563764] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.538 [2024-07-25 13:15:23.563775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.538 [2024-07-25 13:15:23.563786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:31.538 [2024-07-25 13:15:23.563796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.538 [2024-07-25 13:15:23.563807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.538 [2024-07-25 13:15:23.563817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:31.538 [2024-07-25 13:15:23.563827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.538 [2024-07-25 13:15:23.563849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563875] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.538 [2024-07-25 13:15:23.563907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.538 [2024-07-25 13:15:23.563938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.538 [2024-07-25 13:15:23.563969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:31.538 [2024-07-25 13:15:23.563979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.538 [2024-07-25 13:15:23.563990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.538 [2024-07-25 13:15:23.564000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:31.538 [2024-07-25 13:15:23.564010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.538 [2024-07-25 13:15:23.564021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.538 [2024-07-25 13:15:23.564031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:31.538 [2024-07-25 13:15:23.564042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.538 [2024-07-25 13:15:23.564052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.538 [2024-07-25 13:15:23.564063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:31.538 [2024-07-25 13:15:23.564073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.538 [2024-07-25 13:15:23.564084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.538 [2024-07-25 13:15:23.564095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:31.538 [2024-07-25 13:15:23.564128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.538 [2024-07-25 13:15:23.564142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.538 [2024-07-25 13:15:23.564166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:31.538 [2024-07-25 13:15:23.564178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.538 [2024-07-25 13:15:23.564189] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.538 [2024-07-25 13:15:23.564201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.538 [2024-07-25 13:15:23.564212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.538 [2024-07-25 13:15:23.564223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.538 [2024-07-25 13:15:23.564240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.538 [2024-07-25 13:15:23.564251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.538 [2024-07-25 13:15:23.564261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.538 
[2024-07-25 13:15:23.564272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.538 [2024-07-25 13:15:23.564282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.538 [2024-07-25 13:15:23.564292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.538 [2024-07-25 13:15:23.564304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.538 [2024-07-25 13:15:23.564318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.538 [2024-07-25 13:15:23.564332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:31.538 [2024-07-25 13:15:23.564344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:31.538 [2024-07-25 13:15:23.564355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:31.538 [2024-07-25 13:15:23.564367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:31.538 [2024-07-25 13:15:23.564378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:31.538 [2024-07-25 13:15:23.564389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:31.538 [2024-07-25 13:15:23.564401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:31.538 [2024-07-25 13:15:23.564412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:31.538 [2024-07-25 13:15:23.564423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:31.538 [2024-07-25 13:15:23.564435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:31.538 [2024-07-25 13:15:23.564446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:31.538 [2024-07-25 13:15:23.564457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:31.538 [2024-07-25 13:15:23.564468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:31.538 [2024-07-25 13:15:23.564481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:31.538 [2024-07-25 13:15:23.564492] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.538 [2024-07-25 13:15:23.564505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.538 [2024-07-25 13:15:23.564517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.538 [2024-07-25 13:15:23.564528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.538 [2024-07-25 13:15:23.564540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.538 [2024-07-25 13:15:23.564552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.538 [2024-07-25 13:15:23.564565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.564576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.538 [2024-07-25 13:15:23.564588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:19:31.538 [2024-07-25 13:15:23.564599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.609129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.609196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.538 [2024-07-25 13:15:23.609224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.452 ms 00:19:31.538 [2024-07-25 13:15:23.609237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.609447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.609469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:31.538 [2024-07-25 13:15:23.609489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:31.538 [2024-07-25 13:15:23.609501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.648117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.648181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.538 [2024-07-25 13:15:23.648203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.571 ms 00:19:31.538 [2024-07-25 13:15:23.648215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.648382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.648403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.538 [2024-07-25 13:15:23.648416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.538 [2024-07-25 13:15:23.648428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.648747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.648765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.538 [2024-07-25 13:15:23.648779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:31.538 [2024-07-25 13:15:23.648790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.648952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.648971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.538 [2024-07-25 13:15:23.648983] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:31.538 [2024-07-25 13:15:23.649006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.665274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.665341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.538 [2024-07-25 13:15:23.665364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.233 ms 00:19:31.538 [2024-07-25 13:15:23.665375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.681801] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:31.538 [2024-07-25 13:15:23.681863] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:31.538 [2024-07-25 13:15:23.681887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.681900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:31.538 [2024-07-25 13:15:23.681915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.314 ms 00:19:31.538 [2024-07-25 13:15:23.681926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.538 [2024-07-25 13:15:23.712513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.538 [2024-07-25 13:15:23.712607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:31.538 [2024-07-25 13:15:23.712631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.438 ms 00:19:31.538 [2024-07-25 13:15:23.712643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.795 [2024-07-25 13:15:23.729065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.795 [2024-07-25 13:15:23.729149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:31.795 [2024-07-25 13:15:23.729171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.235 ms 00:19:31.795 [2024-07-25 13:15:23.729183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.795 [2024-07-25 13:15:23.745202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.795 [2024-07-25 13:15:23.745271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:31.795 [2024-07-25 13:15:23.745291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.864 ms 00:19:31.795 [2024-07-25 13:15:23.745303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.795 [2024-07-25 13:15:23.746194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.795 [2024-07-25 13:15:23.746232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:31.795 [2024-07-25 13:15:23.746249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:19:31.795 [2024-07-25 13:15:23.746261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.795 [2024-07-25 13:15:23.821551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.795 [2024-07-25 13:15:23.821630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:31.795 [2024-07-25 13:15:23.821653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.253 ms 00:19:31.795 [2024-07-25 13:15:23.821666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.796 [2024-07-25 13:15:23.834877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:31.796 [2024-07-25 13:15:23.849494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.796 [2024-07-25 13:15:23.849570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:31.796 [2024-07-25 13:15:23.849592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.638 ms 00:19:31.796 [2024-07-25 13:15:23.849605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.796 [2024-07-25 13:15:23.849762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.796 [2024-07-25 13:15:23.849783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:31.796 [2024-07-25 13:15:23.849796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:31.796 [2024-07-25 13:15:23.849807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.796 [2024-07-25 13:15:23.849876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.796 [2024-07-25 13:15:23.849893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:31.796 [2024-07-25 13:15:23.849905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:31.796 [2024-07-25 13:15:23.849916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.796 [2024-07-25 13:15:23.849950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.796 [2024-07-25 13:15:23.849970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:31.796 [2024-07-25 13:15:23.849982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:31.796 [2024-07-25 13:15:23.849993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.796 [2024-07-25 13:15:23.850030] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:31.796 [2024-07-25 13:15:23.850047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.796 [2024-07-25 13:15:23.850058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:31.796 [2024-07-25 13:15:23.850070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:31.796 [2024-07-25 13:15:23.850081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.796 [2024-07-25 13:15:23.881484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.796 [2024-07-25 13:15:23.881564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:31.796 [2024-07-25 13:15:23.881586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.345 ms 00:19:31.796 [2024-07-25 13:15:23.881598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.796 [2024-07-25 13:15:23.881776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.796 [2024-07-25 13:15:23.881799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:31.796 [2024-07-25 13:15:23.881813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:31.796 [2024-07-25 13:15:23.881824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:31.796 [2024-07-25 13:15:23.883035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.796 [2024-07-25 13:15:23.887312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.693 ms, result 0 00:19:31.796 [2024-07-25 13:15:23.888054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:31.796 [2024-07-25 13:15:23.904642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.691  Copying: 27/256 [MB] (27 MBps) Copying: 52/256 [MB] (24 MBps) Copying: 78/256 [MB] (25 MBps) Copying: 104/256 [MB] (26 MBps) Copying: 130/256 [MB] (25 MBps) Copying: 156/256 [MB] (26 MBps) Copying: 181/256 [MB] (25 MBps) Copying: 208/256 [MB] (26 MBps) Copying: 233/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-25 13:15:33.867062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.691 [2024-07-25 13:15:33.880018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.691 [2024-07-25 13:15:33.880071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.691 [2024-07-25 13:15:33.880126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.691 [2024-07-25 13:15:33.880159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.691 [2024-07-25 13:15:33.880203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:41.952 [2024-07-25 13:15:33.883602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.883636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.952 [2024-07-25 13:15:33.883652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.375 ms 00:19:41.952 [2024-07-25 13:15:33.883664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:33.883949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.883966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.952 [2024-07-25 13:15:33.883979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:41.952 [2024-07-25 13:15:33.883990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:33.887740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.887770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.952 [2024-07-25 13:15:33.887807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.728 ms 00:19:41.952 [2024-07-25 13:15:33.887819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:33.895174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.895205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:41.952 [2024-07-25 13:15:33.895236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.316 ms 00:19:41.952 [2024-07-25 13:15:33.895247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:33.926050] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.926101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.952 [2024-07-25 13:15:33.926171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.728 ms 00:19:41.952 [2024-07-25 13:15:33.926183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:33.943089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.943141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.952 [2024-07-25 13:15:33.943192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.820 ms 00:19:41.952 [2024-07-25 13:15:33.943211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:33.943399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.943420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.952 [2024-07-25 13:15:33.943433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:41.952 [2024-07-25 13:15:33.943444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:33.975027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:33.975082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:41.952 [2024-07-25 13:15:33.975101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.558 ms 00:19:41.952 [2024-07-25 13:15:33.975132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:34.007725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:34.007811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:41.952 [2024-07-25 13:15:34.007833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.513 ms 00:19:41.952 [2024-07-25 13:15:34.007844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:34.040756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:34.040838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.952 [2024-07-25 13:15:34.040859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.780 ms 00:19:41.952 [2024-07-25 13:15:34.040869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:34.073671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.952 [2024-07-25 13:15:34.073752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.952 [2024-07-25 13:15:34.073773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.612 ms 00:19:41.952 [2024-07-25 13:15:34.073785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.952 [2024-07-25 13:15:34.073907] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.952 [2024-07-25 13:15:34.073948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.073963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 
13:15:34.073990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.074001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.074012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.074023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.074034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.074045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.074056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.952 [2024-07-25 13:15:34.074068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:19:41.953 [2024-07-25 13:15:34.074315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:41.953 [2024-07-25 13:15:34.074995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:41.954 [2024-07-25 13:15:34.075214] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:41.954 [2024-07-25 13:15:34.075225] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
953998eb-5280-4452-a782-072824cd0df1 00:19:41.954 [2024-07-25 13:15:34.075236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:41.954 [2024-07-25 13:15:34.075247] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:41.954 [2024-07-25 13:15:34.075274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:41.954 [2024-07-25 13:15:34.075285] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:41.954 [2024-07-25 13:15:34.075296] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:41.954 [2024-07-25 13:15:34.075306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:41.954 [2024-07-25 13:15:34.075317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:41.954 [2024-07-25 13:15:34.075327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:41.954 [2024-07-25 13:15:34.075337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:41.954 [2024-07-25 13:15:34.075348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.954 [2024-07-25 13:15:34.075359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:41.954 [2024-07-25 13:15:34.075376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:19:41.954 [2024-07-25 13:15:34.075387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.954 [2024-07-25 13:15:34.092898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.954 [2024-07-25 13:15:34.093237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:41.954 [2024-07-25 13:15:34.093364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.458 ms 00:19:41.954 [2024-07-25 13:15:34.093418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.954 [2024-07-25 13:15:34.094022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.954 [2024-07-25 13:15:34.094176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:41.954 [2024-07-25 13:15:34.094299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:19:41.954 [2024-07-25 13:15:34.094435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.954 [2024-07-25 13:15:34.134537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.954 [2024-07-25 13:15:34.134848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.954 [2024-07-25 13:15:34.134969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.954 [2024-07-25 13:15:34.135028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.954 [2024-07-25 13:15:34.135223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.954 [2024-07-25 13:15:34.135286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.954 [2024-07-25 13:15:34.135389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.954 [2024-07-25 13:15:34.135439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.954 [2024-07-25 13:15:34.135543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.954 [2024-07-25 13:15:34.135607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.954 
[2024-07-25 13:15:34.135655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.954 [2024-07-25 13:15:34.135762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.954 [2024-07-25 13:15:34.135918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.954 [2024-07-25 13:15:34.135981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.954 [2024-07-25 13:15:34.136155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.954 [2024-07-25 13:15:34.136210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.230954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.231293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.213 [2024-07-25 13:15:34.231414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.231466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.314480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.314796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.213 [2024-07-25 13:15:34.314918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.314970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.315197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.315254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.213 [2024-07-25 13:15:34.315448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.315505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.315577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.315624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.213 [2024-07-25 13:15:34.315663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.315769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.316012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.316164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.213 [2024-07-25 13:15:34.316281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.316304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.316377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.316395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:42.213 [2024-07-25 13:15:34.316407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.316419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.316473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.316488] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:42.213 [2024-07-25 13:15:34.316500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.316511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.316565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.213 [2024-07-25 13:15:34.316583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.213 [2024-07-25 13:15:34.316595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.213 [2024-07-25 13:15:34.316611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.213 [2024-07-25 13:15:34.316777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.788 ms, result 0 00:19:43.591 00:19:43.591 00:19:43.591 13:15:35 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:43.850 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:43.850 13:15:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:43.850 13:15:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:43.850 13:15:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:43.850 13:15:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.850 13:15:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:43.850 13:15:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:43.850 Process with pid 79783 is not found 00:19:43.850 13:15:36 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79783 00:19:43.850 13:15:36 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79783 ']' 00:19:43.850 13:15:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79783 00:19:43.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79783) - No such process 00:19:43.850 13:15:36 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 79783 is not found' 00:19:43.850 00:19:43.850 real 1m8.005s 00:19:43.850 user 1m35.580s 00:19:43.850 sys 0m6.755s 00:19:43.850 13:15:36 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:43.850 13:15:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:43.850 ************************************ 00:19:43.850 END TEST ftl_trim 00:19:43.850 ************************************ 00:19:44.108 13:15:36 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:44.108 13:15:36 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:44.108 13:15:36 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:44.108 13:15:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:44.108 ************************************ 00:19:44.108 START TEST ftl_restore 00:19:44.108 ************************************ 00:19:44.108 13:15:36 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:44.108 * Looking for test storage... 
00:19:44.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.108 13:15:36 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:44.108 13:15:36 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:44.108 13:15:36 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.108 13:15:36 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.108 13:15:36 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:44.108 13:15:36 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.929BcDsPFW 00:19:44.109 13:15:36 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80038 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80038 00:19:44.109 13:15:36 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 80038 ']' 00:19:44.109 13:15:36 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.109 13:15:36 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.109 13:15:36 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.109 13:15:36 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.109 13:15:36 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.109 13:15:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:44.109 [2024-07-25 13:15:36.290594] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:44.109 [2024-07-25 13:15:36.290972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80038 ] 00:19:44.368 [2024-07-25 13:15:36.464727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.626 [2024-07-25 13:15:36.735368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.564 13:15:37 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.564 13:15:37 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:19:45.564 13:15:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:45.564 13:15:37 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:45.564 13:15:37 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:45.564 13:15:37 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:45.564 13:15:37 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:45.564 13:15:37 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:45.823 13:15:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:45.823 13:15:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:45.823 13:15:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:45.823 13:15:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:45.823 13:15:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:45.823 13:15:37 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:19:45.823 13:15:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:45.823 13:15:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:46.082 13:15:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:46.082 { 00:19:46.082 "name": "nvme0n1", 00:19:46.082 "aliases": [ 00:19:46.082 "a7298d77-1bba-49ca-8e07-f436b1ff3027" 00:19:46.082 ], 00:19:46.082 "product_name": "NVMe disk", 00:19:46.082 "block_size": 4096, 00:19:46.082 "num_blocks": 1310720, 00:19:46.082 "uuid": "a7298d77-1bba-49ca-8e07-f436b1ff3027", 00:19:46.082 "assigned_rate_limits": { 00:19:46.082 "rw_ios_per_sec": 0, 00:19:46.082 "rw_mbytes_per_sec": 0, 00:19:46.082 "r_mbytes_per_sec": 0, 00:19:46.082 "w_mbytes_per_sec": 0 00:19:46.082 }, 00:19:46.082 "claimed": true, 00:19:46.082 "claim_type": "read_many_write_one", 00:19:46.082 "zoned": false, 00:19:46.082 "supported_io_types": { 00:19:46.082 "read": true, 00:19:46.082 "write": true, 00:19:46.082 "unmap": true, 00:19:46.082 "flush": true, 00:19:46.082 "reset": true, 00:19:46.082 "nvme_admin": true, 00:19:46.082 "nvme_io": true, 00:19:46.082 "nvme_io_md": false, 00:19:46.082 "write_zeroes": true, 00:19:46.082 "zcopy": false, 00:19:46.082 "get_zone_info": false, 00:19:46.082 "zone_management": false, 00:19:46.082 "zone_append": false, 00:19:46.082 "compare": true, 00:19:46.082 "compare_and_write": false, 00:19:46.082 "abort": true, 00:19:46.082 "seek_hole": false, 00:19:46.082 "seek_data": false, 00:19:46.082 "copy": true, 00:19:46.082 "nvme_iov_md": false 00:19:46.082 }, 00:19:46.082 "driver_specific": { 00:19:46.082 "nvme": [ 00:19:46.082 { 00:19:46.082 "pci_address": "0000:00:11.0", 00:19:46.082 "trid": { 00:19:46.082 "trtype": "PCIe", 00:19:46.082 "traddr": "0000:00:11.0" 00:19:46.082 }, 00:19:46.082 "ctrlr_data": { 00:19:46.082 "cntlid": 0, 00:19:46.082 "vendor_id": "0x1b36", 00:19:46.082 "model_number": "QEMU NVMe Ctrl", 00:19:46.082 "serial_number": "12341", 00:19:46.082 "firmware_revision": "8.0.0", 00:19:46.082 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:46.082 "oacs": { 00:19:46.082 "security": 0, 00:19:46.082 "format": 1, 00:19:46.082 "firmware": 0, 00:19:46.082 "ns_manage": 1 00:19:46.082 }, 00:19:46.082 "multi_ctrlr": false, 00:19:46.082 "ana_reporting": false 00:19:46.082 }, 00:19:46.082 "vs": { 00:19:46.082 "nvme_version": "1.4" 00:19:46.082 }, 00:19:46.082 "ns_data": { 00:19:46.082 "id": 1, 00:19:46.082 "can_share": false 00:19:46.082 } 00:19:46.082 } 00:19:46.082 ], 00:19:46.082 "mp_policy": "active_passive" 00:19:46.082 } 00:19:46.082 } 00:19:46.082 ]' 00:19:46.082 13:15:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:46.082 13:15:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:46.082 13:15:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:46.082 13:15:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:46.082 13:15:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:46.082 13:15:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:46.082 13:15:38 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:46.082 13:15:38 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:46.082 13:15:38 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:46.082 13:15:38 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:46.082 13:15:38 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:46.340 13:15:38 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=281256ae-7e34-4298-8915-00a7e0faa7be 00:19:46.340 13:15:38 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:46.340 13:15:38 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 281256ae-7e34-4298-8915-00a7e0faa7be 00:19:46.909 13:15:38 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:47.168 13:15:39 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c61b302e-3518-4b5c-876f-eda43431b89a 00:19:47.168 13:15:39 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c61b302e-3518-4b5c-876f-eda43431b89a 00:19:47.426 13:15:39 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.426 13:15:39 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:47.426 13:15:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.426 13:15:39 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:47.426 13:15:39 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:47.426 13:15:39 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.426 13:15:39 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:47.427 13:15:39 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.427 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.427 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:47.427 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:47.427 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:47.427 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.685 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:47.685 { 00:19:47.685 "name": "804f8be7-7181-4e00-95a0-fd1dcbacf88c", 00:19:47.685 "aliases": [ 00:19:47.685 "lvs/nvme0n1p0" 00:19:47.685 ], 00:19:47.685 "product_name": "Logical Volume", 00:19:47.685 "block_size": 4096, 00:19:47.685 "num_blocks": 26476544, 00:19:47.685 "uuid": "804f8be7-7181-4e00-95a0-fd1dcbacf88c", 00:19:47.685 "assigned_rate_limits": { 00:19:47.685 "rw_ios_per_sec": 0, 00:19:47.685 "rw_mbytes_per_sec": 0, 00:19:47.685 "r_mbytes_per_sec": 0, 00:19:47.685 "w_mbytes_per_sec": 0 00:19:47.685 }, 00:19:47.685 "claimed": false, 00:19:47.685 "zoned": false, 00:19:47.685 "supported_io_types": { 00:19:47.685 "read": true, 00:19:47.685 "write": true, 00:19:47.685 "unmap": true, 00:19:47.685 "flush": false, 00:19:47.685 "reset": true, 00:19:47.685 "nvme_admin": false, 00:19:47.685 "nvme_io": false, 00:19:47.685 "nvme_io_md": false, 00:19:47.685 "write_zeroes": true, 00:19:47.685 "zcopy": false, 00:19:47.686 "get_zone_info": false, 00:19:47.686 "zone_management": false, 00:19:47.686 "zone_append": false, 00:19:47.686 "compare": false, 00:19:47.686 "compare_and_write": false, 00:19:47.686 "abort": 
false, 00:19:47.686 "seek_hole": true, 00:19:47.686 "seek_data": true, 00:19:47.686 "copy": false, 00:19:47.686 "nvme_iov_md": false 00:19:47.686 }, 00:19:47.686 "driver_specific": { 00:19:47.686 "lvol": { 00:19:47.686 "lvol_store_uuid": "c61b302e-3518-4b5c-876f-eda43431b89a", 00:19:47.686 "base_bdev": "nvme0n1", 00:19:47.686 "thin_provision": true, 00:19:47.686 "num_allocated_clusters": 0, 00:19:47.686 "snapshot": false, 00:19:47.686 "clone": false, 00:19:47.686 "esnap_clone": false 00:19:47.686 } 00:19:47.686 } 00:19:47.686 } 00:19:47.686 ]' 00:19:47.686 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:47.686 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:47.686 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:47.686 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:47.686 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:47.686 13:15:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:47.686 13:15:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:47.686 13:15:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:47.686 13:15:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:47.944 13:15:40 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:47.944 13:15:40 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:47.944 13:15:40 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.944 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:47.944 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:47.944 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:47.944 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:47.944 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:48.202 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:48.202 { 00:19:48.202 "name": "804f8be7-7181-4e00-95a0-fd1dcbacf88c", 00:19:48.202 "aliases": [ 00:19:48.202 "lvs/nvme0n1p0" 00:19:48.202 ], 00:19:48.202 "product_name": "Logical Volume", 00:19:48.202 "block_size": 4096, 00:19:48.202 "num_blocks": 26476544, 00:19:48.202 "uuid": "804f8be7-7181-4e00-95a0-fd1dcbacf88c", 00:19:48.202 "assigned_rate_limits": { 00:19:48.202 "rw_ios_per_sec": 0, 00:19:48.202 "rw_mbytes_per_sec": 0, 00:19:48.202 "r_mbytes_per_sec": 0, 00:19:48.202 "w_mbytes_per_sec": 0 00:19:48.202 }, 00:19:48.202 "claimed": false, 00:19:48.202 "zoned": false, 00:19:48.202 "supported_io_types": { 00:19:48.202 "read": true, 00:19:48.202 "write": true, 00:19:48.202 "unmap": true, 00:19:48.202 "flush": false, 00:19:48.202 "reset": true, 00:19:48.202 "nvme_admin": false, 00:19:48.202 "nvme_io": false, 00:19:48.202 "nvme_io_md": false, 00:19:48.202 "write_zeroes": true, 00:19:48.202 "zcopy": false, 00:19:48.202 "get_zone_info": false, 00:19:48.202 "zone_management": false, 00:19:48.202 "zone_append": false, 00:19:48.202 "compare": false, 00:19:48.202 "compare_and_write": false, 00:19:48.202 "abort": false, 00:19:48.202 "seek_hole": true, 00:19:48.202 "seek_data": 
true, 00:19:48.202 "copy": false, 00:19:48.202 "nvme_iov_md": false 00:19:48.202 }, 00:19:48.202 "driver_specific": { 00:19:48.202 "lvol": { 00:19:48.202 "lvol_store_uuid": "c61b302e-3518-4b5c-876f-eda43431b89a", 00:19:48.202 "base_bdev": "nvme0n1", 00:19:48.202 "thin_provision": true, 00:19:48.202 "num_allocated_clusters": 0, 00:19:48.202 "snapshot": false, 00:19:48.202 "clone": false, 00:19:48.202 "esnap_clone": false 00:19:48.202 } 00:19:48.202 } 00:19:48.202 } 00:19:48.202 ]' 00:19:48.202 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:48.202 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:48.202 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:48.460 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:48.460 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:48.460 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:48.460 13:15:40 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:48.460 13:15:40 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:48.718 13:15:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:48.718 13:15:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:48.719 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:48.719 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:48.719 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:48.719 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:48.719 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 804f8be7-7181-4e00-95a0-fd1dcbacf88c 00:19:48.977 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:48.977 { 00:19:48.977 "name": "804f8be7-7181-4e00-95a0-fd1dcbacf88c", 00:19:48.977 "aliases": [ 00:19:48.977 "lvs/nvme0n1p0" 00:19:48.977 ], 00:19:48.977 "product_name": "Logical Volume", 00:19:48.977 "block_size": 4096, 00:19:48.977 "num_blocks": 26476544, 00:19:48.977 "uuid": "804f8be7-7181-4e00-95a0-fd1dcbacf88c", 00:19:48.977 "assigned_rate_limits": { 00:19:48.977 "rw_ios_per_sec": 0, 00:19:48.977 "rw_mbytes_per_sec": 0, 00:19:48.977 "r_mbytes_per_sec": 0, 00:19:48.977 "w_mbytes_per_sec": 0 00:19:48.977 }, 00:19:48.977 "claimed": false, 00:19:48.977 "zoned": false, 00:19:48.977 "supported_io_types": { 00:19:48.977 "read": true, 00:19:48.977 "write": true, 00:19:48.977 "unmap": true, 00:19:48.977 "flush": false, 00:19:48.977 "reset": true, 00:19:48.977 "nvme_admin": false, 00:19:48.977 "nvme_io": false, 00:19:48.977 "nvme_io_md": false, 00:19:48.977 "write_zeroes": true, 00:19:48.977 "zcopy": false, 00:19:48.977 "get_zone_info": false, 00:19:48.977 "zone_management": false, 00:19:48.977 "zone_append": false, 00:19:48.977 "compare": false, 00:19:48.977 "compare_and_write": false, 00:19:48.977 "abort": false, 00:19:48.977 "seek_hole": true, 00:19:48.977 "seek_data": true, 00:19:48.977 "copy": false, 00:19:48.977 "nvme_iov_md": false 00:19:48.977 }, 00:19:48.977 "driver_specific": { 00:19:48.977 "lvol": { 00:19:48.977 "lvol_store_uuid": "c61b302e-3518-4b5c-876f-eda43431b89a", 00:19:48.977 "base_bdev": 
"nvme0n1", 00:19:48.977 "thin_provision": true, 00:19:48.977 "num_allocated_clusters": 0, 00:19:48.977 "snapshot": false, 00:19:48.977 "clone": false, 00:19:48.977 "esnap_clone": false 00:19:48.977 } 00:19:48.977 } 00:19:48.977 } 00:19:48.977 ]' 00:19:48.977 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:48.977 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:48.977 13:15:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:48.977 13:15:41 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:48.977 13:15:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:48.977 13:15:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:48.977 13:15:41 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:48.977 13:15:41 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 804f8be7-7181-4e00-95a0-fd1dcbacf88c --l2p_dram_limit 10' 00:19:48.977 13:15:41 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:48.977 13:15:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:48.977 13:15:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:48.977 13:15:41 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:48.977 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:48.977 13:15:41 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 804f8be7-7181-4e00-95a0-fd1dcbacf88c --l2p_dram_limit 10 -c nvc0n1p0 00:19:49.236 [2024-07-25 13:15:41.291917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.291987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.236 [2024-07-25 13:15:41.292010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.236 [2024-07-25 13:15:41.292025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.292129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.292155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.236 [2024-07-25 13:15:41.292169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:49.236 [2024-07-25 13:15:41.292183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.292224] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.236 [2024-07-25 13:15:41.293213] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.236 [2024-07-25 13:15:41.293248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.293266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.236 [2024-07-25 13:15:41.293280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:19:49.236 [2024-07-25 13:15:41.293293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.293427] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fc70e7c4-e8c8-4636-bdce-59f0dc978c04 00:19:49.236 [2024-07-25 
13:15:41.294424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.294466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:49.236 [2024-07-25 13:15:41.294486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:49.236 [2024-07-25 13:15:41.294498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.299010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.299063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.236 [2024-07-25 13:15:41.299085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:19:49.236 [2024-07-25 13:15:41.299098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.299249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.299271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.236 [2024-07-25 13:15:41.299287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:49.236 [2024-07-25 13:15:41.299299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.299397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.299415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.236 [2024-07-25 13:15:41.299433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:49.236 [2024-07-25 13:15:41.299445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.299483] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:49.236 [2024-07-25 13:15:41.303993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.304039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.236 [2024-07-25 13:15:41.304073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.523 ms 00:19:49.236 [2024-07-25 13:15:41.304087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.304149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.236 [2024-07-25 13:15:41.304172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.236 [2024-07-25 13:15:41.304185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:49.236 [2024-07-25 13:15:41.304199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.236 [2024-07-25 13:15:41.304256] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:49.236 [2024-07-25 13:15:41.304424] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:49.236 [2024-07-25 13:15:41.304443] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.236 [2024-07-25 13:15:41.304464] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:49.236 [2024-07-25 13:15:41.304479] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:19:49.236 [2024-07-25 13:15:41.304495] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.237 [2024-07-25 13:15:41.304507] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:49.237 [2024-07-25 13:15:41.304525] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.237 [2024-07-25 13:15:41.304536] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:49.237 [2024-07-25 13:15:41.304549] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:49.237 [2024-07-25 13:15:41.304561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.237 [2024-07-25 13:15:41.304575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.237 [2024-07-25 13:15:41.304588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:19:49.237 [2024-07-25 13:15:41.304601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.237 [2024-07-25 13:15:41.304695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.237 [2024-07-25 13:15:41.304714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.237 [2024-07-25 13:15:41.304727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:49.237 [2024-07-25 13:15:41.304742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.237 [2024-07-25 13:15:41.304851] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.237 [2024-07-25 13:15:41.304873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.237 [2024-07-25 13:15:41.304898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.237 [2024-07-25 13:15:41.304913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.237 [2024-07-25 13:15:41.304926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.237 [2024-07-25 13:15:41.304939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.237 [2024-07-25 13:15:41.304950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:49.237 [2024-07-25 13:15:41.304962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.237 [2024-07-25 13:15:41.304974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:49.237 [2024-07-25 13:15:41.304986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.237 [2024-07-25 13:15:41.305009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.237 [2024-07-25 13:15:41.305027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:49.237 [2024-07-25 13:15:41.305038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.237 [2024-07-25 13:15:41.305051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.237 [2024-07-25 13:15:41.305062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:49.237 [2024-07-25 13:15:41.305074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.237 [2024-07-25 13:15:41.305100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:19:49.237 [2024-07-25 13:15:41.305134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.237 [2024-07-25 13:15:41.305162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.237 [2024-07-25 13:15:41.305186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.237 [2024-07-25 13:15:41.305198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.237 [2024-07-25 13:15:41.305222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.237 [2024-07-25 13:15:41.305233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.237 [2024-07-25 13:15:41.305256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.237 [2024-07-25 13:15:41.305268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.237 [2024-07-25 13:15:41.305290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.237 [2024-07-25 13:15:41.305301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305315] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.237 [2024-07-25 13:15:41.305325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.237 [2024-07-25 13:15:41.305338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:49.237 [2024-07-25 13:15:41.305348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.237 [2024-07-25 13:15:41.305362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:49.237 [2024-07-25 13:15:41.305372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:49.237 [2024-07-25 13:15:41.305384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:49.237 [2024-07-25 13:15:41.305406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:49.237 [2024-07-25 13:15:41.305417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305428] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.237 [2024-07-25 13:15:41.305440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.237 [2024-07-25 13:15:41.305452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.237 [2024-07-25 13:15:41.305463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.237 [2024-07-25 13:15:41.305477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:49.237 [2024-07-25 13:15:41.305488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.237 [2024-07-25 13:15:41.305502] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.237 [2024-07-25 13:15:41.305512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.237 [2024-07-25 13:15:41.305524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.237 [2024-07-25 13:15:41.305535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.237 [2024-07-25 13:15:41.305552] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.237 [2024-07-25 13:15:41.305569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.237 [2024-07-25 13:15:41.305585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:49.237 [2024-07-25 13:15:41.305597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:49.237 [2024-07-25 13:15:41.305610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:49.237 [2024-07-25 13:15:41.305622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:49.237 [2024-07-25 13:15:41.305635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:49.237 [2024-07-25 13:15:41.305646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:49.237 [2024-07-25 13:15:41.305661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:49.237 [2024-07-25 13:15:41.305673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:49.237 [2024-07-25 13:15:41.305686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:49.237 [2024-07-25 13:15:41.305697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:49.237 [2024-07-25 13:15:41.305713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:49.237 [2024-07-25 13:15:41.305725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:49.237 [2024-07-25 13:15:41.305738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:49.237 [2024-07-25 13:15:41.305749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:49.237 [2024-07-25 13:15:41.305763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.237 [2024-07-25 13:15:41.305776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.237 [2024-07-25 13:15:41.305790] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.237 [2024-07-25 13:15:41.305801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.237 [2024-07-25 13:15:41.305985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.237 [2024-07-25 13:15:41.306007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.237 [2024-07-25 13:15:41.306023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.237 [2024-07-25 13:15:41.306035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.237 [2024-07-25 13:15:41.306050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:19:49.237 [2024-07-25 13:15:41.306062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.237 [2024-07-25 13:15:41.306137] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:49.237 [2024-07-25 13:15:41.306157] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:53.425 [2024-07-25 13:15:44.986145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:44.986418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:53.425 [2024-07-25 13:15:44.986572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3680.017 ms 00:19:53.425 [2024-07-25 13:15:44.986729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.019802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.020159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.425 [2024-07-25 13:15:45.020303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.743 ms 00:19:53.425 [2024-07-25 13:15:45.020435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.020680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.020746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:53.425 [2024-07-25 13:15:45.020875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:53.425 [2024-07-25 13:15:45.021017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.060731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.061022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.425 [2024-07-25 13:15:45.061181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.578 ms 00:19:53.425 [2024-07-25 13:15:45.061243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.061430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.061497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.425 [2024-07-25 13:15:45.061641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.006 ms 00:19:53.425 [2024-07-25 13:15:45.061696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.062163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.062305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.425 [2024-07-25 13:15:45.062428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:53.425 [2024-07-25 13:15:45.062565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.062770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.062834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.425 [2024-07-25 13:15:45.062949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:53.425 [2024-07-25 13:15:45.063067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.081212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.081520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.425 [2024-07-25 13:15:45.081675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.042 ms 00:19:53.425 [2024-07-25 13:15:45.081800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.095985] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:53.425 [2024-07-25 13:15:45.099343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.099555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:53.425 [2024-07-25 13:15:45.099587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.323 ms 00:19:53.425 [2024-07-25 13:15:45.099604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.216972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.217062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:53.425 [2024-07-25 13:15:45.217086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.295 ms 00:19:53.425 [2024-07-25 13:15:45.217101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.217391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.217415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:53.425 [2024-07-25 13:15:45.217429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:19:53.425 [2024-07-25 13:15:45.217446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.250345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.250441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:53.425 [2024-07-25 13:15:45.250464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.811 ms 00:19:53.425 [2024-07-25 13:15:45.250484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.282118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 
13:15:45.282219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:53.425 [2024-07-25 13:15:45.282241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.562 ms 00:19:53.425 [2024-07-25 13:15:45.282256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.283045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.283083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:53.425 [2024-07-25 13:15:45.283119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:19:53.425 [2024-07-25 13:15:45.283139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.380014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.380110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:53.425 [2024-07-25 13:15:45.380164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.770 ms 00:19:53.425 [2024-07-25 13:15:45.380184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.413532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.413646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:53.425 [2024-07-25 13:15:45.413683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.286 ms 00:19:53.425 [2024-07-25 13:15:45.413697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.445993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.446077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:53.425 [2024-07-25 13:15:45.446098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.214 ms 00:19:53.425 [2024-07-25 13:15:45.446112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.477208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.477284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:53.425 [2024-07-25 13:15:45.477305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.998 ms 00:19:53.425 [2024-07-25 13:15:45.477319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.477441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.477463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:53.425 [2024-07-25 13:15:45.477491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:53.425 [2024-07-25 13:15:45.477507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.425 [2024-07-25 13:15:45.477636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.425 [2024-07-25 13:15:45.477663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:53.426 [2024-07-25 13:15:45.477675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:53.426 [2024-07-25 13:15:45.477688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.426 [2024-07-25 13:15:45.479068] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4186.686 ms, result 0 00:19:53.426 { 00:19:53.426 "name": "ftl0", 00:19:53.426 "uuid": "fc70e7c4-e8c8-4636-bdce-59f0dc978c04" 00:19:53.426 } 00:19:53.426 13:15:45 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:53.426 13:15:45 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:53.684 13:15:45 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:53.684 13:15:45 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:53.944 [2024-07-25 13:15:46.026506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.026582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.944 [2024-07-25 13:15:46.026638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:53.944 [2024-07-25 13:15:46.026650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 13:15:46.026688] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:53.944 [2024-07-25 13:15:46.030019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.030056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.944 [2024-07-25 13:15:46.030087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.307 ms 00:19:53.944 [2024-07-25 13:15:46.030103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 13:15:46.030496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.030531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.944 [2024-07-25 13:15:46.030558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:53.944 [2024-07-25 13:15:46.030573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 13:15:46.033923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.033979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.944 [2024-07-25 13:15:46.033994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:19:53.944 [2024-07-25 13:15:46.034007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 13:15:46.040396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.040463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:53.944 [2024-07-25 13:15:46.040495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.360 ms 00:19:53.944 [2024-07-25 13:15:46.040508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 13:15:46.071533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.071612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.944 [2024-07-25 13:15:46.071649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.899 ms 00:19:53.944 [2024-07-25 13:15:46.071663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 
13:15:46.090311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.090383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.944 [2024-07-25 13:15:46.090402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.580 ms 00:19:53.944 [2024-07-25 13:15:46.090416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 13:15:46.090628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.090655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.944 [2024-07-25 13:15:46.090669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:19:53.944 [2024-07-25 13:15:46.090683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.944 [2024-07-25 13:15:46.121659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.944 [2024-07-25 13:15:46.121721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:53.944 [2024-07-25 13:15:46.121738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.920 ms 00:19:53.944 [2024-07-25 13:15:46.121752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.204 [2024-07-25 13:15:46.153734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.204 [2024-07-25 13:15:46.153799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:54.204 [2024-07-25 13:15:46.153817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.935 ms 00:19:54.204 [2024-07-25 13:15:46.153831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.204 [2024-07-25 13:15:46.185226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.204 [2024-07-25 13:15:46.185314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.204 [2024-07-25 13:15:46.185336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.346 ms 00:19:54.204 [2024-07-25 13:15:46.185350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.204 [2024-07-25 13:15:46.216475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.204 [2024-07-25 13:15:46.216533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.204 [2024-07-25 13:15:46.216553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.988 ms 00:19:54.204 [2024-07-25 13:15:46.216567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.204 [2024-07-25 13:15:46.216648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.204 [2024-07-25 13:15:46.216676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 
13:15:46.216744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.204 [2024-07-25 13:15:46.216882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.216983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:19:54.205 [2024-07-25 13:15:46.217090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.217994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.218009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.218021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.218034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.218045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:54.205 [2024-07-25 13:15:46.218067] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.205 [2024-07-25 13:15:46.218078] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc70e7c4-e8c8-4636-bdce-59f0dc978c04 00:19:54.205 [2024-07-25 13:15:46.218091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:54.205 [2024-07-25 13:15:46.218102] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:54.205 [2024-07-25 13:15:46.218115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:54.205 [2024-07-25 13:15:46.218126] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:54.205 [2024-07-25 13:15:46.218168] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.206 [2024-07-25 13:15:46.218180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:54.206 [2024-07-25 13:15:46.218209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.206 [2024-07-25 13:15:46.218236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.206 [2024-07-25 13:15:46.218247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.206 [2024-07-25 13:15:46.218259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.206 [2024-07-25 13:15:46.218272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.206 [2024-07-25 13:15:46.218285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:19:54.206 [2024-07-25 13:15:46.218301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.206 [2024-07-25 13:15:46.235293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.206 [2024-07-25 13:15:46.235347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.206 [2024-07-25 13:15:46.235366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.923 ms 00:19:54.206 [2024-07-25 13:15:46.235381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.206 [2024-07-25 13:15:46.235821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.206 [2024-07-25 13:15:46.235858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.206 [2024-07-25 13:15:46.235880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:19:54.206 [2024-07-25 13:15:46.235894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.206 [2024-07-25 13:15:46.288321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.206 [2024-07-25 13:15:46.288425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.206 [2024-07-25 13:15:46.288445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.206 [2024-07-25 13:15:46.288459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.206 [2024-07-25 13:15:46.288562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.206 [2024-07-25 13:15:46.288581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.206 [2024-07-25 13:15:46.288597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.206 [2024-07-25 13:15:46.288610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.206 [2024-07-25 13:15:46.288729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.206 [2024-07-25 13:15:46.288754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.206 [2024-07-25 13:15:46.288766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.206 [2024-07-25 13:15:46.288779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.206 [2024-07-25 13:15:46.288804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.206 [2024-07-25 13:15:46.288823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:19:54.206 [2024-07-25 13:15:46.288835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.206 [2024-07-25 13:15:46.288850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.206 [2024-07-25 13:15:46.388721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.206 [2024-07-25 13:15:46.388794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.206 [2024-07-25 13:15:46.388813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.206 [2024-07-25 13:15:46.388828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.471720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.465 [2024-07-25 13:15:46.471811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.465 [2024-07-25 13:15:46.471834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.465 [2024-07-25 13:15:46.471848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.471987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.465 [2024-07-25 13:15:46.472010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.465 [2024-07-25 13:15:46.472023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.465 [2024-07-25 13:15:46.472035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.472098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.465 [2024-07-25 13:15:46.472160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.465 [2024-07-25 13:15:46.472178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.465 [2024-07-25 13:15:46.472191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.472353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.465 [2024-07-25 13:15:46.472377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.465 [2024-07-25 13:15:46.472390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.465 [2024-07-25 13:15:46.472404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.472457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.465 [2024-07-25 13:15:46.472480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.465 [2024-07-25 13:15:46.472492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.465 [2024-07-25 13:15:46.472505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.472556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.465 [2024-07-25 13:15:46.472575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.465 [2024-07-25 13:15:46.472587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.465 [2024-07-25 13:15:46.472600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.472657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.465 [2024-07-25 13:15:46.472682] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.465 [2024-07-25 13:15:46.472695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.465 [2024-07-25 13:15:46.472708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.465 [2024-07-25 13:15:46.472868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.327 ms, result 0 00:19:54.465 true 00:19:54.465 13:15:46 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80038 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80038 ']' 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80038 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80038 00:19:54.465 killing process with pid 80038 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80038' 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 80038 00:19:54.465 13:15:46 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 80038 00:19:59.735 13:15:51 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:03.924 262144+0 records in 00:20:03.924 262144+0 records out 00:20:03.925 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.66026 s, 230 MB/s 00:20:03.925 13:15:55 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:05.855 13:15:58 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.114 [2024-07-25 13:15:58.084801] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
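The dd / md5sum / spdk_dd sequence recorded just above is the data-preparation step of the restore test: 256K blocks of 4 KiB each is 262144 x 4096 B = 1 GiB, which matches the 1073741824 bytes reported by dd, and the checksum of the source file is recorded, presumably so the same data can be compared after a later read-back (that purpose is an assumption; only the md5sum invocation itself appears in the log). A minimal sketch of that sequence, reusing the paths shown in the log (TESTFILE and FTL_JSON are just shorthand for them, not variables from restore.sh):

  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # 256K blocks x 4 KiB = 262144 * 4096 B = 1 GiB of random test data
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K
  # checksum of the source data (assumed to be used for a later compare)
  md5sum "$TESTFILE"
  # write the file into the ftl0 bdev created earlier, loading the saved bdev config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"

The spdk_dd startup that follows in the log is this last command bringing up its own SPDK application instance before performing the copy.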
00:20:06.114 [2024-07-25 13:15:58.084969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80293 ] 00:20:06.114 [2024-07-25 13:15:58.252438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.372 [2024-07-25 13:15:58.486291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.629 [2024-07-25 13:15:58.803356] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.629 [2024-07-25 13:15:58.803434] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.889 [2024-07-25 13:15:58.963788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.963845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.889 [2024-07-25 13:15:58.963881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.889 [2024-07-25 13:15:58.963893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.963956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.963974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.889 [2024-07-25 13:15:58.963986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:06.889 [2024-07-25 13:15:58.964000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.964034] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.889 [2024-07-25 13:15:58.964985] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.889 [2024-07-25 13:15:58.965032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.965057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.889 [2024-07-25 13:15:58.965069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:20:06.889 [2024-07-25 13:15:58.965081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.966289] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.889 [2024-07-25 13:15:58.982916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.982960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.889 [2024-07-25 13:15:58.982995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.629 ms 00:20:06.889 [2024-07-25 13:15:58.983007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.983077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.983098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.889 [2024-07-25 13:15:58.983147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:06.889 [2024-07-25 13:15:58.983160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.987666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:06.889 [2024-07-25 13:15:58.987711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.889 [2024-07-25 13:15:58.987744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.413 ms 00:20:06.889 [2024-07-25 13:15:58.987755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.987855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.987875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.889 [2024-07-25 13:15:58.987887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:06.889 [2024-07-25 13:15:58.987898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.987965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.987983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.889 [2024-07-25 13:15:58.987995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:06.889 [2024-07-25 13:15:58.988005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.988039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.889 [2024-07-25 13:15:58.992302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.992342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.889 [2024-07-25 13:15:58.992359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:20:06.889 [2024-07-25 13:15:58.992371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.992418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.889 [2024-07-25 13:15:58.992434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.889 [2024-07-25 13:15:58.992447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:06.889 [2024-07-25 13:15:58.992458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.889 [2024-07-25 13:15:58.992508] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.889 [2024-07-25 13:15:58.992547] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.889 [2024-07-25 13:15:58.992623] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.889 [2024-07-25 13:15:58.992709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:06.889 [2024-07-25 13:15:58.992814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.889 [2024-07-25 13:15:58.992829] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.889 [2024-07-25 13:15:58.992843] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:06.889 [2024-07-25 13:15:58.992857] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.889 [2024-07-25 13:15:58.992869] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.889 [2024-07-25 13:15:58.992880] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:06.889 [2024-07-25 13:15:58.992890] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.889 [2024-07-25 13:15:58.992900] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.889 [2024-07-25 13:15:58.992910] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.889 [2024-07-25 13:15:58.992921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.890 [2024-07-25 13:15:58.992936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.890 [2024-07-25 13:15:58.992947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:06.890 [2024-07-25 13:15:58.992958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.890 [2024-07-25 13:15:58.993083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.890 [2024-07-25 13:15:58.993101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.890 [2024-07-25 13:15:58.993114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:06.890 [2024-07-25 13:15:58.993152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.890 [2024-07-25 13:15:58.993291] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.890 [2024-07-25 13:15:58.993311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.890 [2024-07-25 13:15:58.993329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.890 [2024-07-25 13:15:58.993363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.890 [2024-07-25 13:15:58.993394] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.890 [2024-07-25 13:15:58.993414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.890 [2024-07-25 13:15:58.993424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:06.890 [2024-07-25 13:15:58.993434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.890 [2024-07-25 13:15:58.993444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.890 [2024-07-25 13:15:58.993455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:06.890 [2024-07-25 13:15:58.993465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.890 [2024-07-25 13:15:58.993486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993496] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.890 [2024-07-25 13:15:58.993529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.890 [2024-07-25 13:15:58.993560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.890 [2024-07-25 13:15:58.993589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.890 [2024-07-25 13:15:58.993619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.890 [2024-07-25 13:15:58.993649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.890 [2024-07-25 13:15:58.993668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.890 [2024-07-25 13:15:58.993679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:06.890 [2024-07-25 13:15:58.993689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.890 [2024-07-25 13:15:58.993699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.890 [2024-07-25 13:15:58.993709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:06.890 [2024-07-25 13:15:58.993720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.890 [2024-07-25 13:15:58.993740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:06.890 [2024-07-25 13:15:58.993750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993759] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.890 [2024-07-25 13:15:58.993770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.890 [2024-07-25 13:15:58.993780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.890 [2024-07-25 13:15:58.993802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.890 [2024-07-25 13:15:58.993818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.890 [2024-07-25 13:15:58.993828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.890 
[2024-07-25 13:15:58.993853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.890 [2024-07-25 13:15:58.993863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.890 [2024-07-25 13:15:58.993873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.890 [2024-07-25 13:15:58.993884] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.890 [2024-07-25 13:15:58.993913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.890 [2024-07-25 13:15:58.993926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:06.890 [2024-07-25 13:15:58.993937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:06.890 [2024-07-25 13:15:58.993949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:06.890 [2024-07-25 13:15:58.993960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:06.890 [2024-07-25 13:15:58.993971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:06.890 [2024-07-25 13:15:58.993982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:06.890 [2024-07-25 13:15:58.993994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:06.890 [2024-07-25 13:15:58.994005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:06.890 [2024-07-25 13:15:58.994015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:06.890 [2024-07-25 13:15:58.994026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:06.890 [2024-07-25 13:15:58.994038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:06.890 [2024-07-25 13:15:58.994049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:06.890 [2024-07-25 13:15:58.994059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:06.890 [2024-07-25 13:15:58.994071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:06.890 [2024-07-25 13:15:58.994082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.890 [2024-07-25 13:15:58.994093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.890 [2024-07-25 13:15:58.994109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.890 [2024-07-25 13:15:58.994120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.890 [2024-07-25 13:15:58.994147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.890 [2024-07-25 13:15:58.994163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.890 [2024-07-25 13:15:58.994177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.890 [2024-07-25 13:15:58.994189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.890 [2024-07-25 13:15:58.994201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:20:06.890 [2024-07-25 13:15:58.994212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.890 [2024-07-25 13:15:59.037563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.890 [2024-07-25 13:15:59.037628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.891 [2024-07-25 13:15:59.037678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.278 ms 00:20:06.891 [2024-07-25 13:15:59.037690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.891 [2024-07-25 13:15:59.037808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.891 [2024-07-25 13:15:59.037825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.891 [2024-07-25 13:15:59.037837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:06.891 [2024-07-25 13:15:59.037848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.891 [2024-07-25 13:15:59.076898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.891 [2024-07-25 13:15:59.076957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.891 [2024-07-25 13:15:59.077034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.955 ms 00:20:06.891 [2024-07-25 13:15:59.077049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.891 [2024-07-25 13:15:59.077141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.891 [2024-07-25 13:15:59.077162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.891 [2024-07-25 13:15:59.077176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.891 [2024-07-25 13:15:59.077199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.077608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.077635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.149 [2024-07-25 13:15:59.077649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:20:07.149 [2024-07-25 13:15:59.077661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.077816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.077836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.149 [2024-07-25 13:15:59.077849] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:07.149 [2024-07-25 13:15:59.077860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.094297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.094352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.149 [2024-07-25 13:15:59.094371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.405 ms 00:20:07.149 [2024-07-25 13:15:59.094388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.111008] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:07.149 [2024-07-25 13:15:59.111056] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:07.149 [2024-07-25 13:15:59.111090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.111102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:07.149 [2024-07-25 13:15:59.111115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.549 ms 00:20:07.149 [2024-07-25 13:15:59.111158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.141640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.141704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:07.149 [2024-07-25 13:15:59.141747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.416 ms 00:20:07.149 [2024-07-25 13:15:59.141773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.157824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.157869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:07.149 [2024-07-25 13:15:59.157887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.999 ms 00:20:07.149 [2024-07-25 13:15:59.157899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.173760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.173800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:07.149 [2024-07-25 13:15:59.173832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.814 ms 00:20:07.149 [2024-07-25 13:15:59.173843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.149 [2024-07-25 13:15:59.174674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.149 [2024-07-25 13:15:59.174709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:07.149 [2024-07-25 13:15:59.174726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:20:07.149 [2024-07-25 13:15:59.174738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.248349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.248427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:07.150 [2024-07-25 13:15:59.248450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.585 ms 00:20:07.150 [2024-07-25 13:15:59.248462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.260921] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:07.150 [2024-07-25 13:15:59.263688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.263724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.150 [2024-07-25 13:15:59.263757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.145 ms 00:20:07.150 [2024-07-25 13:15:59.263784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.263894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.263914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:07.150 [2024-07-25 13:15:59.263927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:07.150 [2024-07-25 13:15:59.263937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.264026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.264049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.150 [2024-07-25 13:15:59.264062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:07.150 [2024-07-25 13:15:59.264072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.264102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.264117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:07.150 [2024-07-25 13:15:59.264128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:07.150 [2024-07-25 13:15:59.264177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.264235] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:07.150 [2024-07-25 13:15:59.264253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.264264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:07.150 [2024-07-25 13:15:59.264280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:07.150 [2024-07-25 13:15:59.264291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.296107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.296191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.150 [2024-07-25 13:15:59.296228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.787 ms 00:20:07.150 [2024-07-25 13:15:59.296240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.150 [2024-07-25 13:15:59.296369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.150 [2024-07-25 13:15:59.296395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.150 [2024-07-25 13:15:59.296412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:07.150 [2024-07-25 13:15:59.296431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
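[editor's note] Every management step in the startup sequence above is reported by mngt/ftl_mngt.c as an Action (or Rollback) / name / duration / status quadruple, and finish_msg then gives the total ('FTL startup', 333.356 ms below). When triaging a slow startup it can help to rank the steps by duration; a hypothetical helper for that, not part of the SPDK tree, and assuming the raw console log with one *NOTICE* entry per line rather than the wrapped capture shown here:

# sketch only -- pairs each "name:" trace_step line with the following "duration:" line
awk '
  /trace_step: .*\[FTL\]\[ftl0\] name:/     { sub(/.*name: /, "");     name = $0 }
  /trace_step: .*\[FTL\]\[ftl0\] duration:/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, name }
' console.log | sort -rn | head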
00:20:07.150 [2024-07-25 13:15:59.297683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.356 ms, result 0 00:20:44.577  Copying: 26/1024 [MB] (26 MBps) Copying: 53/1024 [MB] (26 MBps) Copying: 80/1024 [MB] (27 MBps) Copying: 107/1024 [MB] (27 MBps) Copying: 134/1024 [MB] (26 MBps) Copying: 161/1024 [MB] (26 MBps) Copying: 188/1024 [MB] (27 MBps) Copying: 215/1024 [MB] (26 MBps) Copying: 241/1024 [MB] (26 MBps) Copying: 269/1024 [MB] (27 MBps) Copying: 296/1024 [MB] (26 MBps) Copying: 323/1024 [MB] (27 MBps) Copying: 350/1024 [MB] (27 MBps) Copying: 375/1024 [MB] (25 MBps) Copying: 403/1024 [MB] (27 MBps) Copying: 431/1024 [MB] (28 MBps) Copying: 458/1024 [MB] (26 MBps) Copying: 485/1024 [MB] (26 MBps) Copying: 512/1024 [MB] (27 MBps) Copying: 540/1024 [MB] (27 MBps) Copying: 566/1024 [MB] (26 MBps) Copying: 593/1024 [MB] (26 MBps) Copying: 620/1024 [MB] (27 MBps) Copying: 646/1024 [MB] (25 MBps) Copying: 674/1024 [MB] (28 MBps) Copying: 701/1024 [MB] (26 MBps) Copying: 729/1024 [MB] (28 MBps) Copying: 756/1024 [MB] (27 MBps) Copying: 784/1024 [MB] (27 MBps) Copying: 810/1024 [MB] (25 MBps) Copying: 837/1024 [MB] (26 MBps) Copying: 866/1024 [MB] (29 MBps) Copying: 898/1024 [MB] (31 MBps) Copying: 928/1024 [MB] (30 MBps) Copying: 957/1024 [MB] (28 MBps) Copying: 988/1024 [MB] (31 MBps) Copying: 1020/1024 [MB] (31 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-25 13:16:36.429657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.429723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:44.577 [2024-07-25 13:16:36.429745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:44.577 [2024-07-25 13:16:36.429758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.429790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.577 [2024-07-25 13:16:36.433152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.433190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:44.577 [2024-07-25 13:16:36.433208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:20:44.577 [2024-07-25 13:16:36.433219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.434665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.434710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:44.577 [2024-07-25 13:16:36.434728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:20:44.577 [2024-07-25 13:16:36.434741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.450984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.451034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:44.577 [2024-07-25 13:16:36.451053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.220 ms 00:20:44.577 [2024-07-25 13:16:36.451065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.457765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.457811] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:44.577 [2024-07-25 13:16:36.457827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.641 ms 00:20:44.577 [2024-07-25 13:16:36.457839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.488757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.488814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:44.577 [2024-07-25 13:16:36.488834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.849 ms 00:20:44.577 [2024-07-25 13:16:36.488845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.506562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.506614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.577 [2024-07-25 13:16:36.506633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.663 ms 00:20:44.577 [2024-07-25 13:16:36.506646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.506840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.506862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.577 [2024-07-25 13:16:36.506875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:20:44.577 [2024-07-25 13:16:36.506892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.537956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.538007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:44.577 [2024-07-25 13:16:36.538027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.042 ms 00:20:44.577 [2024-07-25 13:16:36.538038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.568937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.569023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:44.577 [2024-07-25 13:16:36.569046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.851 ms 00:20:44.577 [2024-07-25 13:16:36.569058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.599609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.599663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:44.577 [2024-07-25 13:16:36.599684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.480 ms 00:20:44.577 [2024-07-25 13:16:36.599711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.630396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.577 [2024-07-25 13:16:36.630454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:44.577 [2024-07-25 13:16:36.630474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.586 ms 00:20:44.577 [2024-07-25 13:16:36.630486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.577 [2024-07-25 13:16:36.630536] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:20:44.577 [2024-07-25 13:16:36.630560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:44.577 [2024-07-25 13:16:36.630752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.630991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631178] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631475] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 
13:16:36.631768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:44.578 [2024-07-25 13:16:36.631789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:44.578 [2024-07-25 13:16:36.631800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc70e7c4-e8c8-4636-bdce-59f0dc978c04 00:20:44.578 [2024-07-25 13:16:36.631823] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:44.578 [2024-07-25 13:16:36.631840] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:44.578 [2024-07-25 13:16:36.631851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:44.578 [2024-07-25 13:16:36.631862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:44.579 [2024-07-25 13:16:36.631872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:44.579 [2024-07-25 13:16:36.631883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:44.579 [2024-07-25 13:16:36.631894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:44.579 [2024-07-25 13:16:36.631904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:44.579 [2024-07-25 13:16:36.631914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:44.579 [2024-07-25 13:16:36.631925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.579 [2024-07-25 13:16:36.631936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:44.579 [2024-07-25 13:16:36.631948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:20:44.579 [2024-07-25 13:16:36.631963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.579 [2024-07-25 13:16:36.648420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.579 [2024-07-25 13:16:36.648470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:44.579 [2024-07-25 13:16:36.648489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.414 ms 00:20:44.579 [2024-07-25 13:16:36.648516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.579 [2024-07-25 13:16:36.648947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.579 [2024-07-25 13:16:36.648979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:44.579 [2024-07-25 13:16:36.649004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:20:44.579 [2024-07-25 13:16:36.649017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.579 [2024-07-25 13:16:36.685792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.579 [2024-07-25 13:16:36.685860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.579 [2024-07-25 13:16:36.685880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.579 [2024-07-25 13:16:36.685891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.579 [2024-07-25 13:16:36.685972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.579 [2024-07-25 13:16:36.685988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.579 [2024-07-25 13:16:36.686000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.579 
[2024-07-25 13:16:36.686011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.579 [2024-07-25 13:16:36.686135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.579 [2024-07-25 13:16:36.686157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.579 [2024-07-25 13:16:36.686182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.579 [2024-07-25 13:16:36.686194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.579 [2024-07-25 13:16:36.686218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.579 [2024-07-25 13:16:36.686232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.579 [2024-07-25 13:16:36.686244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.579 [2024-07-25 13:16:36.686255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.848 [2024-07-25 13:16:36.786011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.848 [2024-07-25 13:16:36.786085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.848 [2024-07-25 13:16:36.786132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.848 [2024-07-25 13:16:36.786150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.848 [2024-07-25 13:16:36.871354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.848 [2024-07-25 13:16:36.871413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.848 [2024-07-25 13:16:36.871434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.848 [2024-07-25 13:16:36.871451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.848 [2024-07-25 13:16:36.871566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.848 [2024-07-25 13:16:36.871594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.848 [2024-07-25 13:16:36.871606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.848 [2024-07-25 13:16:36.871617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.848 [2024-07-25 13:16:36.871666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.848 [2024-07-25 13:16:36.871681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.848 [2024-07-25 13:16:36.871693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.848 [2024-07-25 13:16:36.871705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.848 [2024-07-25 13:16:36.871837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.848 [2024-07-25 13:16:36.871856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.848 [2024-07-25 13:16:36.871874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.848 [2024-07-25 13:16:36.871885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.848 [2024-07-25 13:16:36.871937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.848 [2024-07-25 13:16:36.871955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:44.849 [2024-07-25 13:16:36.871967] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.849 [2024-07-25 13:16:36.871978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.849 [2024-07-25 13:16:36.872022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.849 [2024-07-25 13:16:36.872037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.849 [2024-07-25 13:16:36.872056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.849 [2024-07-25 13:16:36.872067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.849 [2024-07-25 13:16:36.872136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.849 [2024-07-25 13:16:36.872156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.849 [2024-07-25 13:16:36.872168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.849 [2024-07-25 13:16:36.872179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.849 [2024-07-25 13:16:36.872348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.644 ms, result 0 00:20:46.750 00:20:46.750 00:20:46.750 13:16:38 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:46.750 [2024-07-25 13:16:38.628880] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:46.750 [2024-07-25 13:16:38.629039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80696 ] 00:20:46.750 [2024-07-25 13:16:38.793798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.009 [2024-07-25 13:16:39.053497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.267 [2024-07-25 13:16:39.387226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.267 [2024-07-25 13:16:39.387333] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.527 [2024-07-25 13:16:39.549785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.549882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:47.527 [2024-07-25 13:16:39.549913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:47.527 [2024-07-25 13:16:39.549927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.550018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.550038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.527 [2024-07-25 13:16:39.550052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:47.527 [2024-07-25 13:16:39.550068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.550126] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:47.527 [2024-07-25 13:16:39.551203] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:47.527 [2024-07-25 13:16:39.551254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.551271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.527 [2024-07-25 13:16:39.551285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:20:47.527 [2024-07-25 13:16:39.551298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.552542] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:47.527 [2024-07-25 13:16:39.570083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.570201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:47.527 [2024-07-25 13:16:39.570225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.534 ms 00:20:47.527 [2024-07-25 13:16:39.570239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.570381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.570407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:47.527 [2024-07-25 13:16:39.570421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:47.527 [2024-07-25 13:16:39.570433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.575691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.575766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.527 [2024-07-25 13:16:39.575787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms 00:20:47.527 [2024-07-25 13:16:39.575801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.575937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.575961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.527 [2024-07-25 13:16:39.575975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:47.527 [2024-07-25 13:16:39.575987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.576078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.576097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:47.527 [2024-07-25 13:16:39.576141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:47.527 [2024-07-25 13:16:39.576155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.576196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:47.527 [2024-07-25 13:16:39.580671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.580738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.527 [2024-07-25 13:16:39.580758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.484 ms 00:20:47.527 [2024-07-25 13:16:39.580771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 
13:16:39.580849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.580867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:47.527 [2024-07-25 13:16:39.580881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:47.527 [2024-07-25 13:16:39.580893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.581000] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:47.527 [2024-07-25 13:16:39.581041] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:47.527 [2024-07-25 13:16:39.581087] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:47.527 [2024-07-25 13:16:39.581144] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:47.527 [2024-07-25 13:16:39.581261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:47.527 [2024-07-25 13:16:39.581278] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:47.527 [2024-07-25 13:16:39.581293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:47.527 [2024-07-25 13:16:39.581309] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:47.527 [2024-07-25 13:16:39.581323] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:47.527 [2024-07-25 13:16:39.581336] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:47.527 [2024-07-25 13:16:39.581348] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:47.527 [2024-07-25 13:16:39.581360] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:47.527 [2024-07-25 13:16:39.581371] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:47.527 [2024-07-25 13:16:39.581384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.581403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:47.527 [2024-07-25 13:16:39.581416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:20:47.527 [2024-07-25 13:16:39.581427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.581529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.527 [2024-07-25 13:16:39.581554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:47.527 [2024-07-25 13:16:39.581568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:47.527 [2024-07-25 13:16:39.581580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.527 [2024-07-25 13:16:39.581693] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:47.527 [2024-07-25 13:16:39.581713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:47.527 [2024-07-25 13:16:39.581733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:47.527 [2024-07-25 13:16:39.581746] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:47.527 [2024-07-25 13:16:39.581758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:47.527 [2024-07-25 13:16:39.581769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:47.527 [2024-07-25 13:16:39.581781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:47.527 [2024-07-25 13:16:39.581792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:47.527 [2024-07-25 13:16:39.581803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:47.527 [2024-07-25 13:16:39.581815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.527 [2024-07-25 13:16:39.581826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:47.527 [2024-07-25 13:16:39.581837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:47.527 [2024-07-25 13:16:39.581849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.527 [2024-07-25 13:16:39.581860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:47.527 [2024-07-25 13:16:39.581871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:47.527 [2024-07-25 13:16:39.581882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.528 [2024-07-25 13:16:39.581894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:47.528 [2024-07-25 13:16:39.581908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:47.528 [2024-07-25 13:16:39.581919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.528 [2024-07-25 13:16:39.581932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:47.528 [2024-07-25 13:16:39.581959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:47.528 [2024-07-25 13:16:39.581971] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.528 [2024-07-25 13:16:39.581982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:47.528 [2024-07-25 13:16:39.581993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:47.528 [2024-07-25 13:16:39.582004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.528 [2024-07-25 13:16:39.582015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:47.528 [2024-07-25 13:16:39.582026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:47.528 [2024-07-25 13:16:39.582038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.528 [2024-07-25 13:16:39.582049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:47.528 [2024-07-25 13:16:39.582060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:47.528 [2024-07-25 13:16:39.582071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.528 [2024-07-25 13:16:39.582082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:47.528 [2024-07-25 13:16:39.582093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:47.528 [2024-07-25 13:16:39.582123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.528 [2024-07-25 13:16:39.582138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:47.528 [2024-07-25 13:16:39.582150] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:47.528 [2024-07-25 13:16:39.582161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.528 [2024-07-25 13:16:39.582172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:47.528 [2024-07-25 13:16:39.582183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:47.528 [2024-07-25 13:16:39.582194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.528 [2024-07-25 13:16:39.582205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:47.528 [2024-07-25 13:16:39.582216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:47.528 [2024-07-25 13:16:39.582226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.528 [2024-07-25 13:16:39.582238] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:47.528 [2024-07-25 13:16:39.582256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:47.528 [2024-07-25 13:16:39.582272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:47.528 [2024-07-25 13:16:39.582284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.528 [2024-07-25 13:16:39.582296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:47.528 [2024-07-25 13:16:39.582308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:47.528 [2024-07-25 13:16:39.582320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:47.528 [2024-07-25 13:16:39.582332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:47.528 [2024-07-25 13:16:39.582343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:47.528 [2024-07-25 13:16:39.582354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:47.528 [2024-07-25 13:16:39.582368] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:47.528 [2024-07-25 13:16:39.582383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:47.528 [2024-07-25 13:16:39.582396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:47.528 [2024-07-25 13:16:39.582409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:47.528 [2024-07-25 13:16:39.582421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:47.528 [2024-07-25 13:16:39.582433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:47.528 [2024-07-25 13:16:39.582445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:47.528 [2024-07-25 13:16:39.582457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:47.528 [2024-07-25 13:16:39.582470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:47.528 [2024-07-25 
13:16:39.582482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:47.528 [2024-07-25 13:16:39.582494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:47.528 [2024-07-25 13:16:39.582506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:47.528 [2024-07-25 13:16:39.582518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:47.528 [2024-07-25 13:16:39.582530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:47.528 [2024-07-25 13:16:39.582542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:47.528 [2024-07-25 13:16:39.582554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:47.528 [2024-07-25 13:16:39.582567] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:47.528 [2024-07-25 13:16:39.582580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:47.528 [2024-07-25 13:16:39.582600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:47.528 [2024-07-25 13:16:39.582612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:47.528 [2024-07-25 13:16:39.582625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:47.528 [2024-07-25 13:16:39.582637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:47.528 [2024-07-25 13:16:39.582650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.582663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:47.528 [2024-07-25 13:16:39.582675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:20:47.528 [2024-07-25 13:16:39.582687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.632133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.632203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.528 [2024-07-25 13:16:39.632226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.373 ms 00:20:47.528 [2024-07-25 13:16:39.632240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.632368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.632387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:47.528 [2024-07-25 13:16:39.632401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:47.528 [2024-07-25 13:16:39.632414] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.673694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.673763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.528 [2024-07-25 13:16:39.673784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.171 ms 00:20:47.528 [2024-07-25 13:16:39.673797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.673873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.673891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.528 [2024-07-25 13:16:39.673904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:47.528 [2024-07-25 13:16:39.673925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.674374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.674400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.528 [2024-07-25 13:16:39.674416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:20:47.528 [2024-07-25 13:16:39.674436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.674610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.674631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.528 [2024-07-25 13:16:39.674645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:47.528 [2024-07-25 13:16:39.674657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.691255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.691317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.528 [2024-07-25 13:16:39.691339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.562 ms 00:20:47.528 [2024-07-25 13:16:39.691358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.528 [2024-07-25 13:16:39.708002] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:47.528 [2024-07-25 13:16:39.708059] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:47.528 [2024-07-25 13:16:39.708081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.528 [2024-07-25 13:16:39.708094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:47.528 [2024-07-25 13:16:39.708129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.543 ms 00:20:47.528 [2024-07-25 13:16:39.708144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.738601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.738686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:47.787 [2024-07-25 13:16:39.738715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.386 ms 00:20:47.787 [2024-07-25 13:16:39.738728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 
13:16:39.755127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.755182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:47.787 [2024-07-25 13:16:39.755203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.319 ms 00:20:47.787 [2024-07-25 13:16:39.755215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.771030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.771081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:47.787 [2024-07-25 13:16:39.771127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.752 ms 00:20:47.787 [2024-07-25 13:16:39.771145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.771984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.772025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:47.787 [2024-07-25 13:16:39.772042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:20:47.787 [2024-07-25 13:16:39.772054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.849784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.849854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:47.787 [2024-07-25 13:16:39.849876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.694 ms 00:20:47.787 [2024-07-25 13:16:39.849897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.863278] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:47.787 [2024-07-25 13:16:39.866063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.866132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:47.787 [2024-07-25 13:16:39.866157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.083 ms 00:20:47.787 [2024-07-25 13:16:39.866170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.866305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.866327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:47.787 [2024-07-25 13:16:39.866341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:47.787 [2024-07-25 13:16:39.866353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.866451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.866471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:47.787 [2024-07-25 13:16:39.866483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:47.787 [2024-07-25 13:16:39.866495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.866527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.866543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:47.787 [2024-07-25 13:16:39.866556] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:47.787 [2024-07-25 13:16:39.866567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.866609] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:47.787 [2024-07-25 13:16:39.866628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.866645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:47.787 [2024-07-25 13:16:39.866658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:47.787 [2024-07-25 13:16:39.866670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.898224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.898288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:47.787 [2024-07-25 13:16:39.898311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.525 ms 00:20:47.787 [2024-07-25 13:16:39.898332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.898435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-07-25 13:16:39.898455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:47.787 [2024-07-25 13:16:39.898469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:47.787 [2024-07-25 13:16:39.898482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-07-25 13:16:39.899726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 349.440 ms, result 0 00:21:27.656  Copying: 27/1024 [MB] (27 MBps) Copying: 53/1024 [MB] (25 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (26 MBps) Copying: 131/1024 [MB] (25 MBps) Copying: 155/1024 [MB] (24 MBps) Copying: 180/1024 [MB] (25 MBps) Copying: 206/1024 [MB] (25 MBps) Copying: 230/1024 [MB] (24 MBps) Copying: 256/1024 [MB] (25 MBps) Copying: 282/1024 [MB] (25 MBps) Copying: 306/1024 [MB] (24 MBps) Copying: 333/1024 [MB] (26 MBps) Copying: 359/1024 [MB] (25 MBps) Copying: 384/1024 [MB] (25 MBps) Copying: 409/1024 [MB] (25 MBps) Copying: 433/1024 [MB] (23 MBps) Copying: 460/1024 [MB] (26 MBps) Copying: 487/1024 [MB] (27 MBps) Copying: 514/1024 [MB] (26 MBps) Copying: 540/1024 [MB] (26 MBps) Copying: 567/1024 [MB] (26 MBps) Copying: 594/1024 [MB] (26 MBps) Copying: 621/1024 [MB] (27 MBps) Copying: 647/1024 [MB] (26 MBps) Copying: 675/1024 [MB] (27 MBps) Copying: 703/1024 [MB] (27 MBps) Copying: 730/1024 [MB] (27 MBps) Copying: 757/1024 [MB] (27 MBps) Copying: 783/1024 [MB] (26 MBps) Copying: 809/1024 [MB] (25 MBps) Copying: 835/1024 [MB] (25 MBps) Copying: 862/1024 [MB] (27 MBps) Copying: 889/1024 [MB] (27 MBps) Copying: 917/1024 [MB] (27 MBps) Copying: 944/1024 [MB] (27 MBps) Copying: 972/1024 [MB] (27 MBps) Copying: 1001/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-25 13:17:19.613932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.614079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:27.656 [2024-07-25 13:17:19.614171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:27.656 [2024-07-25 13:17:19.614214] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.614308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:27.656 [2024-07-25 13:17:19.621496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.621540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:27.656 [2024-07-25 13:17:19.621559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.582 ms 00:21:27.656 [2024-07-25 13:17:19.621580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.621907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.621960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:27.656 [2024-07-25 13:17:19.621989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:21:27.656 [2024-07-25 13:17:19.622012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.625704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.625758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:27.656 [2024-07-25 13:17:19.625775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:21:27.656 [2024-07-25 13:17:19.625787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.632630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.632673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:27.656 [2024-07-25 13:17:19.632689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:21:27.656 [2024-07-25 13:17:19.632702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.664889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.664945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:27.656 [2024-07-25 13:17:19.664965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.076 ms 00:21:27.656 [2024-07-25 13:17:19.664987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.683055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.683134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:27.656 [2024-07-25 13:17:19.683158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.994 ms 00:21:27.656 [2024-07-25 13:17:19.683171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.683399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.683434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:27.656 [2024-07-25 13:17:19.683457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:27.656 [2024-07-25 13:17:19.683469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.715167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.715228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: persist band info metadata 00:21:27.656 [2024-07-25 13:17:19.715249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.671 ms 00:21:27.656 [2024-07-25 13:17:19.715261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.746421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.746476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:27.656 [2024-07-25 13:17:19.746495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.098 ms 00:21:27.656 [2024-07-25 13:17:19.746507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.777213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.777275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:27.656 [2024-07-25 13:17:19.777316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.648 ms 00:21:27.656 [2024-07-25 13:17:19.777329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.656 [2024-07-25 13:17:19.808425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.656 [2024-07-25 13:17:19.808485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:27.656 [2024-07-25 13:17:19.808505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.943 ms 00:21:27.657 [2024-07-25 13:17:19.808517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.657 [2024-07-25 13:17:19.808577] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:27.657 [2024-07-25 13:17:19.808602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.808975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809420] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:27.657 [2024-07-25 13:17:19.809729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 
13:17:19.809741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:27.658 [2024-07-25 13:17:19.809908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:27.658 [2024-07-25 13:17:19.809920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc70e7c4-e8c8-4636-bdce-59f0dc978c04 00:21:27.658 [2024-07-25 13:17:19.809941] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:27.658 [2024-07-25 13:17:19.809953] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:27.658 [2024-07-25 13:17:19.809964] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:27.658 [2024-07-25 13:17:19.809976] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:27.658 [2024-07-25 13:17:19.809987] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:27.658 [2024-07-25 13:17:19.809998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:27.658 [2024-07-25 13:17:19.810011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:27.658 [2024-07-25 13:17:19.810022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:27.658 [2024-07-25 13:17:19.810032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:27.658 [2024-07-25 13:17:19.810044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.658 [2024-07-25 13:17:19.810058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:27.658 [2024-07-25 13:17:19.810076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:21:27.658 [2024-07-25 13:17:19.810088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:27.658 [2024-07-25 13:17:19.826601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.658 [2024-07-25 13:17:19.826646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:27.658 [2024-07-25 13:17:19.826679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.444 ms 00:21:27.658 [2024-07-25 13:17:19.826691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.658 [2024-07-25 13:17:19.827151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.658 [2024-07-25 13:17:19.827177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:27.658 [2024-07-25 13:17:19.827192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:21:27.658 [2024-07-25 13:17:19.827204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:19.864206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:19.864269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.917 [2024-07-25 13:17:19.864288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:19.864300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:19.864383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:19.864399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.917 [2024-07-25 13:17:19.864411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:19.864422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:19.864526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:19.864545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.917 [2024-07-25 13:17:19.864559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:19.864571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:19.864593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:19.864607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.917 [2024-07-25 13:17:19.864619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:19.864630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:19.963376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:19.963455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.917 [2024-07-25 13:17:19.963476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:19.963488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.062066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:20.062153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:27.917 [2024-07-25 13:17:20.062175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 
13:17:20.062187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.062305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:20.062324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:27.917 [2024-07-25 13:17:20.062337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:20.062348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.062405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:20.062420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:27.917 [2024-07-25 13:17:20.062432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:20.062443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.062561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:20.062593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:27.917 [2024-07-25 13:17:20.062607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:20.062619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.062669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:20.062687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:27.917 [2024-07-25 13:17:20.062715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:20.062727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.062773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:20.062796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:27.917 [2024-07-25 13:17:20.062809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:20.062820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.062872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.917 [2024-07-25 13:17:20.062889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:27.917 [2024-07-25 13:17:20.062901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.917 [2024-07-25 13:17:20.062913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.917 [2024-07-25 13:17:20.063053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.139 ms, result 0 00:21:29.291 00:21:29.291 00:21:29.291 13:17:21 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:31.193 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:31.193 13:17:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:31.451 [2024-07-25 13:17:23.403252] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 
initialization... 00:21:31.451 [2024-07-25 13:17:23.403412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81149 ] 00:21:31.451 [2024-07-25 13:17:23.567045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.709 [2024-07-25 13:17:23.752453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.968 [2024-07-25 13:17:24.062060] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:31.968 [2024-07-25 13:17:24.062160] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:32.227 [2024-07-25 13:17:24.223020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.227 [2024-07-25 13:17:24.223120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:32.227 [2024-07-25 13:17:24.223145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:32.227 [2024-07-25 13:17:24.223158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.227 [2024-07-25 13:17:24.223240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.227 [2024-07-25 13:17:24.223261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:32.227 [2024-07-25 13:17:24.223273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:32.227 [2024-07-25 13:17:24.223289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.227 [2024-07-25 13:17:24.223327] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:32.228 [2024-07-25 13:17:24.224309] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:32.228 [2024-07-25 13:17:24.224354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.224370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:32.228 [2024-07-25 13:17:24.224383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:21:32.228 [2024-07-25 13:17:24.224394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.225597] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:32.228 [2024-07-25 13:17:24.242673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.242775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:32.228 [2024-07-25 13:17:24.242798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.071 ms 00:21:32.228 [2024-07-25 13:17:24.242811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.242956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.242982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:32.228 [2024-07-25 13:17:24.242995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:32.228 [2024-07-25 13:17:24.243007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.248250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.248322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:32.228 [2024-07-25 13:17:24.248342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.057 ms 00:21:32.228 [2024-07-25 13:17:24.248355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.248488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.248512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:32.228 [2024-07-25 13:17:24.248525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:32.228 [2024-07-25 13:17:24.248537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.248638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.248658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:32.228 [2024-07-25 13:17:24.248671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:32.228 [2024-07-25 13:17:24.248682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.248719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:32.228 [2024-07-25 13:17:24.253097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.253178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:32.228 [2024-07-25 13:17:24.253196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.387 ms 00:21:32.228 [2024-07-25 13:17:24.253208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.253272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.253290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:32.228 [2024-07-25 13:17:24.253303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:32.228 [2024-07-25 13:17:24.253314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.253418] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:32.228 [2024-07-25 13:17:24.253455] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:32.228 [2024-07-25 13:17:24.253502] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:32.228 [2024-07-25 13:17:24.253539] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:32.228 [2024-07-25 13:17:24.253651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:32.228 [2024-07-25 13:17:24.253675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:32.228 [2024-07-25 13:17:24.253692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:32.228 [2024-07-25 13:17:24.253707] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:32.228 [2024-07-25 13:17:24.253737] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:32.228 [2024-07-25 13:17:24.253749] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:32.228 [2024-07-25 13:17:24.253760] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:32.228 [2024-07-25 13:17:24.253771] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:32.228 [2024-07-25 13:17:24.253782] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:32.228 [2024-07-25 13:17:24.253794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.253810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:32.228 [2024-07-25 13:17:24.253822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:21:32.228 [2024-07-25 13:17:24.253833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.253933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.228 [2024-07-25 13:17:24.253960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:32.228 [2024-07-25 13:17:24.253973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:32.228 [2024-07-25 13:17:24.253985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.228 [2024-07-25 13:17:24.254094] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:32.228 [2024-07-25 13:17:24.254132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:32.228 [2024-07-25 13:17:24.254152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:32.228 [2024-07-25 13:17:24.254186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:32.228 [2024-07-25 13:17:24.254218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.228 [2024-07-25 13:17:24.254239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:32.228 [2024-07-25 13:17:24.254249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:32.228 [2024-07-25 13:17:24.254260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.228 [2024-07-25 13:17:24.254270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:32.228 [2024-07-25 13:17:24.254281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:32.228 [2024-07-25 13:17:24.254291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:32.228 [2024-07-25 13:17:24.254315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254325] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:32.228 [2024-07-25 13:17:24.254359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:32.228 [2024-07-25 13:17:24.254390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:32.228 [2024-07-25 13:17:24.254420] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:32.228 [2024-07-25 13:17:24.254450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:32.228 [2024-07-25 13:17:24.254481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.228 [2024-07-25 13:17:24.254501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:32.228 [2024-07-25 13:17:24.254511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:32.228 [2024-07-25 13:17:24.254521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.228 [2024-07-25 13:17:24.254531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:32.228 [2024-07-25 13:17:24.254542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:32.228 [2024-07-25 13:17:24.254552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:32.228 [2024-07-25 13:17:24.254572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:32.228 [2024-07-25 13:17:24.254582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254592] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:32.228 [2024-07-25 13:17:24.254603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:32.228 [2024-07-25 13:17:24.254614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.228 [2024-07-25 13:17:24.254625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.228 [2024-07-25 13:17:24.254637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:32.229 [2024-07-25 13:17:24.254648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:32.229 [2024-07-25 13:17:24.254658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:32.229 
[2024-07-25 13:17:24.254669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:32.229 [2024-07-25 13:17:24.254679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:32.229 [2024-07-25 13:17:24.254689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:32.229 [2024-07-25 13:17:24.254701] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:32.229 [2024-07-25 13:17:24.254715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.229 [2024-07-25 13:17:24.254728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:32.229 [2024-07-25 13:17:24.254740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:32.229 [2024-07-25 13:17:24.254751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:32.229 [2024-07-25 13:17:24.254763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:32.229 [2024-07-25 13:17:24.254774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:32.229 [2024-07-25 13:17:24.254786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:32.229 [2024-07-25 13:17:24.254798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:32.229 [2024-07-25 13:17:24.254810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:32.229 [2024-07-25 13:17:24.254821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:32.229 [2024-07-25 13:17:24.254833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:32.229 [2024-07-25 13:17:24.254844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:32.229 [2024-07-25 13:17:24.254855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:32.229 [2024-07-25 13:17:24.254867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:32.229 [2024-07-25 13:17:24.254879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:32.229 [2024-07-25 13:17:24.254891] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:32.229 [2024-07-25 13:17:24.254904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.229 [2024-07-25 13:17:24.254921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:32.229 [2024-07-25 13:17:24.254933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:32.229 [2024-07-25 13:17:24.254944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:32.229 [2024-07-25 13:17:24.254956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:32.229 [2024-07-25 13:17:24.254969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.254981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:32.229 [2024-07-25 13:17:24.254992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:21:32.229 [2024-07-25 13:17:24.255003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.301253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.301340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:32.229 [2024-07-25 13:17:24.301365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.177 ms 00:21:32.229 [2024-07-25 13:17:24.301378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.301525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.301544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:32.229 [2024-07-25 13:17:24.301557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:32.229 [2024-07-25 13:17:24.301569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.342631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.342703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:32.229 [2024-07-25 13:17:24.342726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.928 ms 00:21:32.229 [2024-07-25 13:17:24.342738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.342829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.342847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:32.229 [2024-07-25 13:17:24.342860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:32.229 [2024-07-25 13:17:24.342878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.343323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.343354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:32.229 [2024-07-25 13:17:24.343369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:21:32.229 [2024-07-25 13:17:24.343381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.343539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.343576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:32.229 [2024-07-25 13:17:24.343590] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:32.229 [2024-07-25 13:17:24.343601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.359806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.359882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:32.229 [2024-07-25 13:17:24.359905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.167 ms 00:21:32.229 [2024-07-25 13:17:24.359922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.376764] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:32.229 [2024-07-25 13:17:24.376871] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:32.229 [2024-07-25 13:17:24.376899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.376913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:32.229 [2024-07-25 13:17:24.376930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.753 ms 00:21:32.229 [2024-07-25 13:17:24.376942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.229 [2024-07-25 13:17:24.409272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.229 [2024-07-25 13:17:24.409417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:32.229 [2024-07-25 13:17:24.409443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.171 ms 00:21:32.229 [2024-07-25 13:17:24.409458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.426215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.487 [2024-07-25 13:17:24.426308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:32.487 [2024-07-25 13:17:24.426331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.505 ms 00:21:32.487 [2024-07-25 13:17:24.426343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.442288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.487 [2024-07-25 13:17:24.442361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:32.487 [2024-07-25 13:17:24.442382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.857 ms 00:21:32.487 [2024-07-25 13:17:24.442394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.443349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.487 [2024-07-25 13:17:24.443389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:32.487 [2024-07-25 13:17:24.443406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:21:32.487 [2024-07-25 13:17:24.443417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.518101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.487 [2024-07-25 13:17:24.518224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:32.487 [2024-07-25 13:17:24.518247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.645 ms 00:21:32.487 [2024-07-25 13:17:24.518267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.531301] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:32.487 [2024-07-25 13:17:24.534089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.487 [2024-07-25 13:17:24.534140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:32.487 [2024-07-25 13:17:24.534161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.612 ms 00:21:32.487 [2024-07-25 13:17:24.534173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.534310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.487 [2024-07-25 13:17:24.534346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:32.487 [2024-07-25 13:17:24.534362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:32.487 [2024-07-25 13:17:24.534373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.534475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.487 [2024-07-25 13:17:24.534504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:32.487 [2024-07-25 13:17:24.534519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:32.487 [2024-07-25 13:17:24.534530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.487 [2024-07-25 13:17:24.534563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.488 [2024-07-25 13:17:24.534579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:32.488 [2024-07-25 13:17:24.534592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:32.488 [2024-07-25 13:17:24.534603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.488 [2024-07-25 13:17:24.534645] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:32.488 [2024-07-25 13:17:24.534669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.488 [2024-07-25 13:17:24.534686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:32.488 [2024-07-25 13:17:24.534698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:32.488 [2024-07-25 13:17:24.534710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.488 [2024-07-25 13:17:24.565929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.488 [2024-07-25 13:17:24.566002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:32.488 [2024-07-25 13:17:24.566023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.190 ms 00:21:32.488 [2024-07-25 13:17:24.566046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.488 [2024-07-25 13:17:24.566253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.488 [2024-07-25 13:17:24.566277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:32.488 [2024-07-25 13:17:24.566292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:32.488 [2024-07-25 13:17:24.566304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
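The FTL layout numbers reported earlier in this startup sequence are internally consistent: the size of the l2p region follows directly from the dumped L2P entry count and address size. A minimal cross-check sketch in Python, using only values copied from the NOTICE lines above (this is plain arithmetic, not SPDK API):

    # Cross-check of the dumped FTL layout numbers (values copied from the log above).
    l2p_entries = 20971520            # "L2P entries: 20971520"
    l2p_entry_size = 4                # "L2P address size: 4" (bytes per entry)
    l2p_mib = l2p_entries * l2p_entry_size / (1024 * 1024)
    print(l2p_mib)                    # 80.0 -> matches "Region l2p ... blocks: 80.00 MiB"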
00:21:32.488 [2024-07-25 13:17:24.567511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 343.957 ms, result 0 00:22:10.339  Copying: 27/1024 [MB] (27 MBps) Copying: 55/1024 [MB] (27 MBps) Copying: 83/1024 [MB] (28 MBps) Copying: 111/1024 [MB] (27 MBps) Copying: 139/1024 [MB] (27 MBps) Copying: 168/1024 [MB] (28 MBps) Copying: 194/1024 [MB] (26 MBps) Copying: 222/1024 [MB] (27 MBps) Copying: 251/1024 [MB] (28 MBps) Copying: 279/1024 [MB] (28 MBps) Copying: 308/1024 [MB] (28 MBps) Copying: 336/1024 [MB] (28 MBps) Copying: 365/1024 [MB] (28 MBps) Copying: 391/1024 [MB] (25 MBps) Copying: 418/1024 [MB] (27 MBps) Copying: 445/1024 [MB] (27 MBps) Copying: 472/1024 [MB] (27 MBps) Copying: 499/1024 [MB] (26 MBps) Copying: 527/1024 [MB] (27 MBps) Copying: 554/1024 [MB] (27 MBps) Copying: 583/1024 [MB] (28 MBps) Copying: 612/1024 [MB] (29 MBps) Copying: 643/1024 [MB] (30 MBps) Copying: 672/1024 [MB] (29 MBps) Copying: 700/1024 [MB] (28 MBps) Copying: 728/1024 [MB] (27 MBps) Copying: 756/1024 [MB] (28 MBps) Copying: 783/1024 [MB] (26 MBps) Copying: 811/1024 [MB] (28 MBps) Copying: 839/1024 [MB] (28 MBps) Copying: 868/1024 [MB] (28 MBps) Copying: 894/1024 [MB] (26 MBps) Copying: 923/1024 [MB] (28 MBps) Copying: 952/1024 [MB] (28 MBps) Copying: 980/1024 [MB] (28 MBps) Copying: 1010/1024 [MB] (29 MBps) Copying: 1023/1024 [MB] (13 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-25 13:18:02.177466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.339 [2024-07-25 13:18:02.177544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.339 [2024-07-25 13:18:02.177565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:10.339 [2024-07-25 13:18:02.177579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.339 [2024-07-25 13:18:02.178637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.339 [2024-07-25 13:18:02.184217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.339 [2024-07-25 13:18:02.184258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.339 [2024-07-25 13:18:02.184276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.525 ms 00:22:10.339 [2024-07-25 13:18:02.184289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.339 [2024-07-25 13:18:02.201637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.339 [2024-07-25 13:18:02.201715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.339 [2024-07-25 13:18:02.201740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.819 ms 00:22:10.339 [2024-07-25 13:18:02.201755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.339 [2024-07-25 13:18:02.226029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.339 [2024-07-25 13:18:02.226099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.339 [2024-07-25 13:18:02.226148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.228 ms 00:22:10.339 [2024-07-25 13:18:02.226174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.234444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.234492] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.340 [2024-07-25 13:18:02.234512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.181 ms 00:22:10.340 [2024-07-25 13:18:02.234527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.273401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.273481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.340 [2024-07-25 13:18:02.273506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.775 ms 00:22:10.340 [2024-07-25 13:18:02.273527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.294712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.294786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.340 [2024-07-25 13:18:02.294809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.102 ms 00:22:10.340 [2024-07-25 13:18:02.294825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.377614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.377716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.340 [2024-07-25 13:18:02.377743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.647 ms 00:22:10.340 [2024-07-25 13:18:02.377759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.416734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.416811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:10.340 [2024-07-25 13:18:02.416835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.944 ms 00:22:10.340 [2024-07-25 13:18:02.416849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.455208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.455282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:10.340 [2024-07-25 13:18:02.455306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.282 ms 00:22:10.340 [2024-07-25 13:18:02.455321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.490764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.490829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.340 [2024-07-25 13:18:02.490870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.359 ms 00:22:10.340 [2024-07-25 13:18:02.490883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.522222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.340 [2024-07-25 13:18:02.522290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:10.340 [2024-07-25 13:18:02.522312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.191 ms 00:22:10.340 [2024-07-25 13:18:02.522324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.340 [2024-07-25 13:18:02.522382] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:22:10.340 [2024-07-25 13:18:02.522408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120832 / 261120 wr_cnt: 1 state: open 00:22:10.340 [2024-07-25 13:18:02.522423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522985] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.522998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:10.340 [2024-07-25 13:18:02.523102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 
13:18:02.523295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:22:10.341 [2024-07-25 13:18:02.523607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:10.341 [2024-07-25 13:18:02.523629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:10.341 [2024-07-25 13:18:02.523641] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc70e7c4-e8c8-4636-bdce-59f0dc978c04 00:22:10.341 [2024-07-25 13:18:02.523653] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120832 00:22:10.341 [2024-07-25 13:18:02.523664] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121792 00:22:10.341 [2024-07-25 13:18:02.523674] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120832 00:22:10.341 [2024-07-25 13:18:02.523695] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:22:10.341 [2024-07-25 13:18:02.523706] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:10.341 [2024-07-25 13:18:02.523720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:10.341 [2024-07-25 13:18:02.523745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:10.341 [2024-07-25 13:18:02.523762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:10.341 [2024-07-25 13:18:02.523779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:10.341 [2024-07-25 13:18:02.523799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.341 [2024-07-25 13:18:02.523819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:10.341 [2024-07-25 13:18:02.523839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:22:10.341 [2024-07-25 13:18:02.523857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.540674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.600 [2024-07-25 13:18:02.540726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.600 [2024-07-25 13:18:02.540761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.748 ms 00:22:10.600 [2024-07-25 13:18:02.540773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.541254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.600 [2024-07-25 13:18:02.541288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.600 [2024-07-25 13:18:02.541304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:22:10.600 [2024-07-25 13:18:02.541315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.578234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.578329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.600 [2024-07-25 13:18:02.578358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.578370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.578456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.578473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.600 [2024-07-25 13:18:02.578486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.578511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.578614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.578635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.600 [2024-07-25 13:18:02.578647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.578665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.578696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.578710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.600 [2024-07-25 13:18:02.578723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.578734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.678010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.678091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.600 [2024-07-25 13:18:02.678126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.678149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.762823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.762894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.600 [2024-07-25 13:18:02.762915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.762928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.763057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.763078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.600 [2024-07-25 13:18:02.763091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.763127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.763192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.763209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.600 [2024-07-25 13:18:02.763221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.763232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.763352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.763379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.600 [2024-07-25 13:18:02.763392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.763404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.763454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.763478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.600 
[2024-07-25 13:18:02.763491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.763502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.763547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.763564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.600 [2024-07-25 13:18:02.763575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.763587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.763645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.600 [2024-07-25 13:18:02.763670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.600 [2024-07-25 13:18:02.763683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.600 [2024-07-25 13:18:02.763694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.600 [2024-07-25 13:18:02.763837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 588.614 ms, result 0 00:22:12.500 00:22:12.500 00:22:12.500 13:18:04 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:12.500 [2024-07-25 13:18:04.357415] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:12.500 [2024-07-25 13:18:04.357591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81553 ] 00:22:12.500 [2024-07-25 13:18:04.523996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.757 [2024-07-25 13:18:04.709978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.015 [2024-07-25 13:18:05.021549] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.015 [2024-07-25 13:18:05.021634] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.015 [2024-07-25 13:18:05.182352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.015 [2024-07-25 13:18:05.182428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.015 [2024-07-25 13:18:05.182451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:13.015 [2024-07-25 13:18:05.182463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.015 [2024-07-25 13:18:05.182537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.015 [2024-07-25 13:18:05.182557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.015 [2024-07-25 13:18:05.182570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:13.015 [2024-07-25 13:18:05.182585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.015 [2024-07-25 13:18:05.182622] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.015 [2024-07-25 
13:18:05.183558] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.015 [2024-07-25 13:18:05.183605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.015 [2024-07-25 13:18:05.183620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.015 [2024-07-25 13:18:05.183633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:22:13.015 [2024-07-25 13:18:05.183645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.015 [2024-07-25 13:18:05.184825] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:13.015 [2024-07-25 13:18:05.201041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.015 [2024-07-25 13:18:05.201089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:13.015 [2024-07-25 13:18:05.201127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.217 ms 00:22:13.015 [2024-07-25 13:18:05.201143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.015 [2024-07-25 13:18:05.201220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.015 [2024-07-25 13:18:05.201242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:13.015 [2024-07-25 13:18:05.201256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:13.015 [2024-07-25 13:18:05.201268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.274 [2024-07-25 13:18:05.205782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.274 [2024-07-25 13:18:05.205835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.274 [2024-07-25 13:18:05.205872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.415 ms 00:22:13.274 [2024-07-25 13:18:05.205884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.274 [2024-07-25 13:18:05.205991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.274 [2024-07-25 13:18:05.206010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.274 [2024-07-25 13:18:05.206023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:13.274 [2024-07-25 13:18:05.206035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.274 [2024-07-25 13:18:05.206124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.275 [2024-07-25 13:18:05.206145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.275 [2024-07-25 13:18:05.206158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:13.275 [2024-07-25 13:18:05.206170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.275 [2024-07-25 13:18:05.206207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.275 [2024-07-25 13:18:05.210485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.275 [2024-07-25 13:18:05.210523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.275 [2024-07-25 13:18:05.210539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.289 ms 00:22:13.275 [2024-07-25 13:18:05.210551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:13.275 [2024-07-25 13:18:05.210605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.275 [2024-07-25 13:18:05.210623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.275 [2024-07-25 13:18:05.210636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:13.275 [2024-07-25 13:18:05.210647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.275 [2024-07-25 13:18:05.210695] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:13.275 [2024-07-25 13:18:05.210727] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:13.275 [2024-07-25 13:18:05.210771] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:13.275 [2024-07-25 13:18:05.210796] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:13.275 [2024-07-25 13:18:05.210902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.275 [2024-07-25 13:18:05.210918] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.275 [2024-07-25 13:18:05.210933] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:13.275 [2024-07-25 13:18:05.210948] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.275 [2024-07-25 13:18:05.210962] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.275 [2024-07-25 13:18:05.210974] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:13.275 [2024-07-25 13:18:05.210986] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.275 [2024-07-25 13:18:05.210997] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.275 [2024-07-25 13:18:05.211008] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.275 [2024-07-25 13:18:05.211020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.275 [2024-07-25 13:18:05.211037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.275 [2024-07-25 13:18:05.211049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:22:13.275 [2024-07-25 13:18:05.211061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.275 [2024-07-25 13:18:05.211183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.275 [2024-07-25 13:18:05.211203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.275 [2024-07-25 13:18:05.211216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:13.275 [2024-07-25 13:18:05.211227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.275 [2024-07-25 13:18:05.211364] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.275 [2024-07-25 13:18:05.211383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.275 [2024-07-25 13:18:05.211402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211414] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.275 [2024-07-25 13:18:05.211437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.275 [2024-07-25 13:18:05.211470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.275 [2024-07-25 13:18:05.211491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.275 [2024-07-25 13:18:05.211502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:13.275 [2024-07-25 13:18:05.211513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.275 [2024-07-25 13:18:05.211524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.275 [2024-07-25 13:18:05.211534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:13.275 [2024-07-25 13:18:05.211545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.275 [2024-07-25 13:18:05.211567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.275 [2024-07-25 13:18:05.211613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.275 [2024-07-25 13:18:05.211647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.275 [2024-07-25 13:18:05.211678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.275 [2024-07-25 13:18:05.211710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.275 [2024-07-25 13:18:05.211742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.275 [2024-07-25 13:18:05.211763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.275 [2024-07-25 
13:18:05.211774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:13.275 [2024-07-25 13:18:05.211784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.275 [2024-07-25 13:18:05.211795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.275 [2024-07-25 13:18:05.211806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:13.275 [2024-07-25 13:18:05.211817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.275 [2024-07-25 13:18:05.211838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:13.275 [2024-07-25 13:18:05.211848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211858] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.275 [2024-07-25 13:18:05.211870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.275 [2024-07-25 13:18:05.211881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211893] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.275 [2024-07-25 13:18:05.211904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.275 [2024-07-25 13:18:05.211916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.275 [2024-07-25 13:18:05.211927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.275 [2024-07-25 13:18:05.211938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.275 [2024-07-25 13:18:05.211948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.275 [2024-07-25 13:18:05.211959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.275 [2024-07-25 13:18:05.211972] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.275 [2024-07-25 13:18:05.211987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.275 [2024-07-25 13:18:05.212000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:13.275 [2024-07-25 13:18:05.212012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:13.275 [2024-07-25 13:18:05.212024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:13.275 [2024-07-25 13:18:05.212035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:13.276 [2024-07-25 13:18:05.212047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:13.276 [2024-07-25 13:18:05.212059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:13.276 [2024-07-25 13:18:05.212070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:22:13.276 [2024-07-25 13:18:05.212082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:13.276 [2024-07-25 13:18:05.212094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:13.276 [2024-07-25 13:18:05.212122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:13.276 [2024-07-25 13:18:05.212137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:13.276 [2024-07-25 13:18:05.212149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:13.276 [2024-07-25 13:18:05.212160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:13.276 [2024-07-25 13:18:05.212172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:13.276 [2024-07-25 13:18:05.212184] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.276 [2024-07-25 13:18:05.212197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.276 [2024-07-25 13:18:05.212215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.276 [2024-07-25 13:18:05.212228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.276 [2024-07-25 13:18:05.212239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.276 [2024-07-25 13:18:05.212252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.276 [2024-07-25 13:18:05.212265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.212277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.276 [2024-07-25 13:18:05.212289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:22:13.276 [2024-07-25 13:18:05.212300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.253169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.253245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.276 [2024-07-25 13:18:05.253269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.802 ms 00:22:13.276 [2024-07-25 13:18:05.253281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.253410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.253429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.276 [2024-07-25 13:18:05.253442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:13.276 [2024-07-25 13:18:05.253453] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.292153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.292223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.276 [2024-07-25 13:18:05.292245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.593 ms 00:22:13.276 [2024-07-25 13:18:05.292257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.292336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.292354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.276 [2024-07-25 13:18:05.292367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:13.276 [2024-07-25 13:18:05.292386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.292796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.292833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.276 [2024-07-25 13:18:05.292849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:22:13.276 [2024-07-25 13:18:05.292861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.293032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.293062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.276 [2024-07-25 13:18:05.293076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:22:13.276 [2024-07-25 13:18:05.293088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.309166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.309229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.276 [2024-07-25 13:18:05.309251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.026 ms 00:22:13.276 [2024-07-25 13:18:05.309269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.325587] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:13.276 [2024-07-25 13:18:05.325641] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.276 [2024-07-25 13:18:05.325661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.325674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.276 [2024-07-25 13:18:05.325689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.215 ms 00:22:13.276 [2024-07-25 13:18:05.325700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.355494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.355568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:13.276 [2024-07-25 13:18:05.355590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.729 ms 00:22:13.276 [2024-07-25 13:18:05.355602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 
[2024-07-25 13:18:05.371458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.371512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:13.276 [2024-07-25 13:18:05.371531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.787 ms 00:22:13.276 [2024-07-25 13:18:05.371543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.387060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.387153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:13.276 [2024-07-25 13:18:05.387175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.461 ms 00:22:13.276 [2024-07-25 13:18:05.387187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.388012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.388052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:13.276 [2024-07-25 13:18:05.388068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:22:13.276 [2024-07-25 13:18:05.388080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.276 [2024-07-25 13:18:05.461572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.276 [2024-07-25 13:18:05.461653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.276 [2024-07-25 13:18:05.461674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.437 ms 00:22:13.276 [2024-07-25 13:18:05.461696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.534 [2024-07-25 13:18:05.474552] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:13.534 [2024-07-25 13:18:05.477283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.534 [2024-07-25 13:18:05.477329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.534 [2024-07-25 13:18:05.477350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:22:13.534 [2024-07-25 13:18:05.477362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.534 [2024-07-25 13:18:05.477494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.534 [2024-07-25 13:18:05.477515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:13.534 [2024-07-25 13:18:05.477530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:13.534 [2024-07-25 13:18:05.477542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.534 [2024-07-25 13:18:05.479113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.534 [2024-07-25 13:18:05.479160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:13.534 [2024-07-25 13:18:05.479178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:22:13.534 [2024-07-25 13:18:05.479192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.534 [2024-07-25 13:18:05.479235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.534 [2024-07-25 13:18:05.479253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:13.534 [2024-07-25 
13:18:05.479267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:13.534 [2024-07-25 13:18:05.479280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.534 [2024-07-25 13:18:05.479325] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:13.535 [2024-07-25 13:18:05.479343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.535 [2024-07-25 13:18:05.479360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:13.535 [2024-07-25 13:18:05.479378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:13.535 [2024-07-25 13:18:05.479391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.535 [2024-07-25 13:18:05.510463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.535 [2024-07-25 13:18:05.510520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:13.535 [2024-07-25 13:18:05.510540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.043 ms 00:22:13.535 [2024-07-25 13:18:05.510561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.535 [2024-07-25 13:18:05.510662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.535 [2024-07-25 13:18:05.510681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:13.535 [2024-07-25 13:18:05.510696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:13.535 [2024-07-25 13:18:05.510707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.535 [2024-07-25 13:18:05.519037] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 335.013 ms, result 0 00:22:51.869  Copying: 25/1024 [MB] (25 MBps) Copying: 52/1024 [MB] (27 MBps) Copying: 78/1024 [MB] (25 MBps) Copying: 105/1024 [MB] (26 MBps) Copying: 132/1024 [MB] (27 MBps) Copying: 159/1024 [MB] (26 MBps) Copying: 186/1024 [MB] (27 MBps) Copying: 212/1024 [MB] (26 MBps) Copying: 239/1024 [MB] (26 MBps) Copying: 265/1024 [MB] (25 MBps) Copying: 291/1024 [MB] (26 MBps) Copying: 318/1024 [MB] (26 MBps) Copying: 345/1024 [MB] (26 MBps) Copying: 370/1024 [MB] (25 MBps) Copying: 396/1024 [MB] (26 MBps) Copying: 425/1024 [MB] (28 MBps) Copying: 451/1024 [MB] (26 MBps) Copying: 476/1024 [MB] (25 MBps) Copying: 505/1024 [MB] (28 MBps) Copying: 531/1024 [MB] (26 MBps) Copying: 558/1024 [MB] (26 MBps) Copying: 584/1024 [MB] (26 MBps) Copying: 609/1024 [MB] (25 MBps) Copying: 635/1024 [MB] (25 MBps) Copying: 662/1024 [MB] (27 MBps) Copying: 690/1024 [MB] (27 MBps) Copying: 718/1024 [MB] (28 MBps) Copying: 746/1024 [MB] (28 MBps) Copying: 773/1024 [MB] (26 MBps) Copying: 802/1024 [MB] (29 MBps) Copying: 831/1024 [MB] (29 MBps) Copying: 855/1024 [MB] (23 MBps) Copying: 884/1024 [MB] (29 MBps) Copying: 913/1024 [MB] (28 MBps) Copying: 941/1024 [MB] (28 MBps) Copying: 968/1024 [MB] (26 MBps) Copying: 994/1024 [MB] (26 MBps) Copying: 1022/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-25 13:18:43.952488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.869 [2024-07-25 13:18:43.952573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:51.869 [2024-07-25 13:18:43.952596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:51.869 [2024-07-25 13:18:43.952608] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.869 [2024-07-25 13:18:43.952662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:51.869 [2024-07-25 13:18:43.956046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.869 [2024-07-25 13:18:43.956083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:51.869 [2024-07-25 13:18:43.956100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.361 ms 00:22:51.869 [2024-07-25 13:18:43.956123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.869 [2024-07-25 13:18:43.956371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.869 [2024-07-25 13:18:43.956390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:51.869 [2024-07-25 13:18:43.956403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:22:51.869 [2024-07-25 13:18:43.956414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.869 [2024-07-25 13:18:43.961636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.869 [2024-07-25 13:18:43.961689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:51.869 [2024-07-25 13:18:43.961708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.191 ms 00:22:51.869 [2024-07-25 13:18:43.961720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.869 [2024-07-25 13:18:43.968407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.869 [2024-07-25 13:18:43.968463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:51.869 [2024-07-25 13:18:43.968480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.641 ms 00:22:51.869 [2024-07-25 13:18:43.968491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.869 [2024-07-25 13:18:43.999899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.869 [2024-07-25 13:18:43.999966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:51.869 [2024-07-25 13:18:43.999987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.315 ms 00:22:51.869 [2024-07-25 13:18:43.999999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.869 [2024-07-25 13:18:44.018020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.869 [2024-07-25 13:18:44.018090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:51.869 [2024-07-25 13:18:44.018134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.956 ms 00:22:51.869 [2024-07-25 13:18:44.018147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.128 [2024-07-25 13:18:44.112674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.128 [2024-07-25 13:18:44.112790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:52.128 [2024-07-25 13:18:44.112814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.468 ms 00:22:52.128 [2024-07-25 13:18:44.112838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.128 [2024-07-25 13:18:44.144800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.128 [2024-07-25 13:18:44.144857] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:52.128 [2024-07-25 13:18:44.144878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.934 ms 00:22:52.128 [2024-07-25 13:18:44.144890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.128 [2024-07-25 13:18:44.175885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.128 [2024-07-25 13:18:44.175932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:52.128 [2024-07-25 13:18:44.175951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.937 ms 00:22:52.128 [2024-07-25 13:18:44.175974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.128 [2024-07-25 13:18:44.207880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.128 [2024-07-25 13:18:44.207950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:52.128 [2024-07-25 13:18:44.207970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.851 ms 00:22:52.128 [2024-07-25 13:18:44.208006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.129 [2024-07-25 13:18:44.239724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.129 [2024-07-25 13:18:44.239803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:52.129 [2024-07-25 13:18:44.239826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.558 ms 00:22:52.129 [2024-07-25 13:18:44.239851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.129 [2024-07-25 13:18:44.239935] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:52.129 [2024-07-25 13:18:44.239962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:22:52.129 [2024-07-25 13:18:44.239977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.239989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240447] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 
13:18:44.240746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.240994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.241006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:52.129 [2024-07-25 13:18:44.241018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:22:52.130 [2024-07-25 13:18:44.241065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:52.130 [2024-07-25 13:18:44.241245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:52.130 [2024-07-25 13:18:44.241256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc70e7c4-e8c8-4636-bdce-59f0dc978c04 00:22:52.130 [2024-07-25 13:18:44.241268] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:22:52.130 [2024-07-25 13:18:44.241279] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13760 00:22:52.130 [2024-07-25 13:18:44.241290] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12800 00:22:52.130 [2024-07-25 13:18:44.241312] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0750 00:22:52.130 [2024-07-25 13:18:44.241323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:52.130 [2024-07-25 13:18:44.241335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:52.130 [2024-07-25 13:18:44.241351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:52.130 [2024-07-25 13:18:44.241362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:52.130 [2024-07-25 13:18:44.241372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:52.130 [2024-07-25 13:18:44.241383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.130 [2024-07-25 13:18:44.241395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:52.130 [2024-07-25 13:18:44.241406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:22:52.130 [2024-07-25 13:18:44.241417] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:52.130 [2024-07-25 13:18:44.257985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.130 [2024-07-25 13:18:44.258036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:52.130 [2024-07-25 13:18:44.258056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.511 ms 00:22:52.130 [2024-07-25 13:18:44.258084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.130 [2024-07-25 13:18:44.258560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.130 [2024-07-25 13:18:44.258596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:52.130 [2024-07-25 13:18:44.258611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:22:52.130 [2024-07-25 13:18:44.258623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.130 [2024-07-25 13:18:44.295480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.130 [2024-07-25 13:18:44.295545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:52.130 [2024-07-25 13:18:44.295568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.130 [2024-07-25 13:18:44.295580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.130 [2024-07-25 13:18:44.295665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.130 [2024-07-25 13:18:44.295681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:52.130 [2024-07-25 13:18:44.295693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.130 [2024-07-25 13:18:44.295704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.130 [2024-07-25 13:18:44.295821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.130 [2024-07-25 13:18:44.295842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:52.130 [2024-07-25 13:18:44.295855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.130 [2024-07-25 13:18:44.295873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.130 [2024-07-25 13:18:44.295896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.130 [2024-07-25 13:18:44.295910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:52.130 [2024-07-25 13:18:44.295922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.130 [2024-07-25 13:18:44.295933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.394581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.394653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:52.389 [2024-07-25 13:18:44.394673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.389 [2024-07-25 13:18:44.394694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.478935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.479001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:52.389 [2024-07-25 13:18:44.479021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:52.389 [2024-07-25 13:18:44.479033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.479134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.479155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:52.389 [2024-07-25 13:18:44.479168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.389 [2024-07-25 13:18:44.479179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.479257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.479274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:52.389 [2024-07-25 13:18:44.479286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.389 [2024-07-25 13:18:44.479297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.479420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.479439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:52.389 [2024-07-25 13:18:44.479452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.389 [2024-07-25 13:18:44.479462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.479513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.479539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:52.389 [2024-07-25 13:18:44.479558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.389 [2024-07-25 13:18:44.479570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.479615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.479629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:52.389 [2024-07-25 13:18:44.479642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.389 [2024-07-25 13:18:44.479652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.479708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.389 [2024-07-25 13:18:44.479725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:52.389 [2024-07-25 13:18:44.479737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.389 [2024-07-25 13:18:44.479748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.389 [2024-07-25 13:18:44.479885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.370 ms, result 0 00:22:53.765 00:22:53.765 00:22:53.765 13:18:45 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:55.665 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:55.665 13:18:47 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:55.665 13:18:47 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:55.665 13:18:47 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:55.925 13:18:47 ftl.ftl_restore -- 
ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80038 00:22:55.925 13:18:47 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80038 ']' 00:22:55.925 13:18:47 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80038 00:22:55.925 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80038) - No such process 00:22:55.925 Process with pid 80038 is not found 00:22:55.925 13:18:47 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 80038 is not found' 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:55.925 Remove shared memory files 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:55.925 13:18:47 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:55.925 00:22:55.925 real 3m11.876s 00:22:55.925 user 2m57.867s 00:22:55.925 sys 0m16.080s 00:22:55.925 13:18:47 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:55.925 13:18:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:55.925 ************************************ 00:22:55.925 END TEST ftl_restore 00:22:55.925 ************************************ 00:22:55.925 13:18:47 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:55.925 13:18:47 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:55.925 13:18:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:55.925 13:18:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:55.925 ************************************ 00:22:55.925 START TEST ftl_dirty_shutdown 00:22:55.925 ************************************ 00:22:55.925 13:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:55.925 * Looking for test storage... 00:22:55.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
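For orientation, dirty_shutdown.sh is invoked here as "dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0", and the getopts trace just below shows how those arguments are consumed: -c selects the NV cache PCIe address and the remaining positional argument becomes the base device. Roughly (a sketch of the pattern traced below, not the script verbatim):

  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0 -> NV cache (write buffer) device
    esac
  done
  shift 2                      # drop '-c <addr>', as the trace shows
  device=$1                    # 0000:00:11.0 -> base device
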
00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82045 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82045 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82045 ']' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.925 13:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:56.184 [2024-07-25 13:18:48.212273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:56.184 [2024-07-25 13:18:48.212454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82045 ] 00:22:56.442 [2024-07-25 13:18:48.375173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.442 [2024-07-25 13:18:48.599009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:57.376 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:57.633 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:57.633 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:57.633 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:57.633 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:57.633 13:18:49 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:22:57.633 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:57.634 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:57.634 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:57.892 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:57.892 { 00:22:57.892 "name": "nvme0n1", 00:22:57.892 "aliases": [ 00:22:57.892 "12b98ce6-a483-41e2-96b6-6061db900b44" 00:22:57.892 ], 00:22:57.892 "product_name": "NVMe disk", 00:22:57.892 "block_size": 4096, 00:22:57.892 "num_blocks": 1310720, 00:22:57.892 "uuid": "12b98ce6-a483-41e2-96b6-6061db900b44", 00:22:57.892 "assigned_rate_limits": { 00:22:57.892 "rw_ios_per_sec": 0, 00:22:57.892 "rw_mbytes_per_sec": 0, 00:22:57.892 "r_mbytes_per_sec": 0, 00:22:57.892 "w_mbytes_per_sec": 0 00:22:57.892 }, 00:22:57.892 "claimed": true, 00:22:57.893 "claim_type": "read_many_write_one", 00:22:57.893 "zoned": false, 00:22:57.893 "supported_io_types": { 00:22:57.893 "read": true, 00:22:57.893 "write": true, 00:22:57.893 "unmap": true, 00:22:57.893 "flush": true, 00:22:57.893 "reset": true, 00:22:57.893 "nvme_admin": true, 00:22:57.893 "nvme_io": true, 00:22:57.893 "nvme_io_md": false, 00:22:57.893 "write_zeroes": true, 00:22:57.893 "zcopy": false, 00:22:57.893 "get_zone_info": false, 00:22:57.893 "zone_management": false, 00:22:57.893 "zone_append": false, 00:22:57.893 "compare": true, 00:22:57.893 "compare_and_write": false, 00:22:57.893 "abort": true, 00:22:57.893 "seek_hole": false, 00:22:57.893 "seek_data": false, 00:22:57.893 "copy": true, 00:22:57.893 "nvme_iov_md": false 00:22:57.893 }, 00:22:57.893 "driver_specific": { 00:22:57.893 "nvme": [ 00:22:57.893 { 00:22:57.893 "pci_address": "0000:00:11.0", 00:22:57.893 "trid": { 00:22:57.893 "trtype": "PCIe", 00:22:57.893 "traddr": "0000:00:11.0" 00:22:57.893 }, 00:22:57.893 "ctrlr_data": { 00:22:57.893 "cntlid": 0, 00:22:57.893 "vendor_id": "0x1b36", 00:22:57.893 "model_number": "QEMU NVMe Ctrl", 00:22:57.893 "serial_number": "12341", 00:22:57.893 "firmware_revision": "8.0.0", 00:22:57.893 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:57.893 "oacs": { 00:22:57.893 "security": 0, 00:22:57.893 "format": 1, 00:22:57.893 "firmware": 0, 00:22:57.893 "ns_manage": 1 00:22:57.893 }, 00:22:57.893 "multi_ctrlr": false, 00:22:57.893 "ana_reporting": false 00:22:57.893 }, 00:22:57.893 "vs": { 00:22:57.893 "nvme_version": "1.4" 00:22:57.893 }, 00:22:57.893 "ns_data": { 00:22:57.893 "id": 1, 00:22:57.893 "can_share": false 00:22:57.893 } 00:22:57.893 } 00:22:57.893 ], 00:22:57.893 "mp_policy": "active_passive" 00:22:57.893 } 00:22:57.893 } 00:22:57.893 ]' 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:57.893 13:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:58.151 13:18:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c61b302e-3518-4b5c-876f-eda43431b89a 00:22:58.151 13:18:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:58.151 13:18:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c61b302e-3518-4b5c-876f-eda43431b89a 00:22:58.408 13:18:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:58.666 13:18:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a91feac6-3a51-4201-b2e9-28572dc6d0ff 00:22:58.666 13:18:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a91feac6-3a51-4201-b2e9-28572dc6d0ff 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:58.924 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:59.491 { 00:22:59.491 "name": "f6a96e36-d246-4e0a-a28c-65e1f5910b6b", 00:22:59.491 "aliases": [ 00:22:59.491 "lvs/nvme0n1p0" 00:22:59.491 ], 00:22:59.491 "product_name": "Logical Volume", 00:22:59.491 "block_size": 4096, 00:22:59.491 "num_blocks": 26476544, 00:22:59.491 "uuid": "f6a96e36-d246-4e0a-a28c-65e1f5910b6b", 00:22:59.491 "assigned_rate_limits": { 00:22:59.491 "rw_ios_per_sec": 0, 00:22:59.491 "rw_mbytes_per_sec": 0, 00:22:59.491 "r_mbytes_per_sec": 0, 00:22:59.491 "w_mbytes_per_sec": 0 00:22:59.491 }, 00:22:59.491 "claimed": false, 00:22:59.491 "zoned": false, 00:22:59.491 "supported_io_types": { 00:22:59.491 "read": true, 00:22:59.491 "write": true, 00:22:59.491 "unmap": true, 00:22:59.491 "flush": false, 00:22:59.491 "reset": true, 
00:22:59.491 "nvme_admin": false, 00:22:59.491 "nvme_io": false, 00:22:59.491 "nvme_io_md": false, 00:22:59.491 "write_zeroes": true, 00:22:59.491 "zcopy": false, 00:22:59.491 "get_zone_info": false, 00:22:59.491 "zone_management": false, 00:22:59.491 "zone_append": false, 00:22:59.491 "compare": false, 00:22:59.491 "compare_and_write": false, 00:22:59.491 "abort": false, 00:22:59.491 "seek_hole": true, 00:22:59.491 "seek_data": true, 00:22:59.491 "copy": false, 00:22:59.491 "nvme_iov_md": false 00:22:59.491 }, 00:22:59.491 "driver_specific": { 00:22:59.491 "lvol": { 00:22:59.491 "lvol_store_uuid": "a91feac6-3a51-4201-b2e9-28572dc6d0ff", 00:22:59.491 "base_bdev": "nvme0n1", 00:22:59.491 "thin_provision": true, 00:22:59.491 "num_allocated_clusters": 0, 00:22:59.491 "snapshot": false, 00:22:59.491 "clone": false, 00:22:59.491 "esnap_clone": false 00:22:59.491 } 00:22:59.491 } 00:22:59.491 } 00:22:59.491 ]' 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:59.491 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:59.749 13:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:23:00.007 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:00.007 { 00:23:00.007 "name": "f6a96e36-d246-4e0a-a28c-65e1f5910b6b", 00:23:00.007 "aliases": [ 00:23:00.007 "lvs/nvme0n1p0" 00:23:00.007 ], 00:23:00.007 "product_name": "Logical Volume", 00:23:00.007 "block_size": 4096, 00:23:00.007 "num_blocks": 26476544, 00:23:00.007 "uuid": "f6a96e36-d246-4e0a-a28c-65e1f5910b6b", 00:23:00.007 "assigned_rate_limits": { 00:23:00.007 "rw_ios_per_sec": 0, 00:23:00.007 "rw_mbytes_per_sec": 0, 00:23:00.007 "r_mbytes_per_sec": 0, 00:23:00.007 "w_mbytes_per_sec": 0 00:23:00.007 }, 00:23:00.007 "claimed": false, 00:23:00.007 "zoned": false, 00:23:00.007 "supported_io_types": { 00:23:00.007 "read": true, 00:23:00.007 "write": true, 00:23:00.007 "unmap": 
true, 00:23:00.007 "flush": false, 00:23:00.007 "reset": true, 00:23:00.007 "nvme_admin": false, 00:23:00.007 "nvme_io": false, 00:23:00.007 "nvme_io_md": false, 00:23:00.007 "write_zeroes": true, 00:23:00.007 "zcopy": false, 00:23:00.007 "get_zone_info": false, 00:23:00.007 "zone_management": false, 00:23:00.007 "zone_append": false, 00:23:00.007 "compare": false, 00:23:00.007 "compare_and_write": false, 00:23:00.007 "abort": false, 00:23:00.007 "seek_hole": true, 00:23:00.007 "seek_data": true, 00:23:00.007 "copy": false, 00:23:00.007 "nvme_iov_md": false 00:23:00.007 }, 00:23:00.007 "driver_specific": { 00:23:00.007 "lvol": { 00:23:00.007 "lvol_store_uuid": "a91feac6-3a51-4201-b2e9-28572dc6d0ff", 00:23:00.007 "base_bdev": "nvme0n1", 00:23:00.007 "thin_provision": true, 00:23:00.007 "num_allocated_clusters": 0, 00:23:00.007 "snapshot": false, 00:23:00.007 "clone": false, 00:23:00.007 "esnap_clone": false 00:23:00.007 } 00:23:00.007 } 00:23:00.007 } 00:23:00.007 ]' 00:23:00.007 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:00.007 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:00.007 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:00.264 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:00.264 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:00.264 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:00.264 13:18:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:00.265 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6a96e36-d246-4e0a-a28c-65e1f5910b6b 00:23:00.523 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:00.523 { 00:23:00.523 "name": "f6a96e36-d246-4e0a-a28c-65e1f5910b6b", 00:23:00.523 "aliases": [ 00:23:00.523 "lvs/nvme0n1p0" 00:23:00.523 ], 00:23:00.523 "product_name": "Logical Volume", 00:23:00.523 "block_size": 4096, 00:23:00.523 "num_blocks": 26476544, 00:23:00.523 "uuid": "f6a96e36-d246-4e0a-a28c-65e1f5910b6b", 00:23:00.523 "assigned_rate_limits": { 00:23:00.523 "rw_ios_per_sec": 0, 00:23:00.523 "rw_mbytes_per_sec": 0, 00:23:00.523 "r_mbytes_per_sec": 0, 00:23:00.523 "w_mbytes_per_sec": 0 00:23:00.523 }, 00:23:00.523 "claimed": false, 00:23:00.523 "zoned": false, 00:23:00.523 "supported_io_types": { 00:23:00.523 "read": true, 00:23:00.523 "write": true, 00:23:00.523 "unmap": true, 00:23:00.523 "flush": false, 00:23:00.523 "reset": true, 00:23:00.523 "nvme_admin": false, 00:23:00.523 
"nvme_io": false, 00:23:00.523 "nvme_io_md": false, 00:23:00.523 "write_zeroes": true, 00:23:00.523 "zcopy": false, 00:23:00.523 "get_zone_info": false, 00:23:00.523 "zone_management": false, 00:23:00.523 "zone_append": false, 00:23:00.523 "compare": false, 00:23:00.523 "compare_and_write": false, 00:23:00.523 "abort": false, 00:23:00.523 "seek_hole": true, 00:23:00.523 "seek_data": true, 00:23:00.523 "copy": false, 00:23:00.523 "nvme_iov_md": false 00:23:00.523 }, 00:23:00.523 "driver_specific": { 00:23:00.523 "lvol": { 00:23:00.523 "lvol_store_uuid": "a91feac6-3a51-4201-b2e9-28572dc6d0ff", 00:23:00.523 "base_bdev": "nvme0n1", 00:23:00.523 "thin_provision": true, 00:23:00.523 "num_allocated_clusters": 0, 00:23:00.523 "snapshot": false, 00:23:00.523 "clone": false, 00:23:00.523 "esnap_clone": false 00:23:00.523 } 00:23:00.523 } 00:23:00.523 } 00:23:00.523 ]' 00:23:00.523 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f6a96e36-d246-4e0a-a28c-65e1f5910b6b --l2p_dram_limit 10' 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:00.781 13:18:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f6a96e36-d246-4e0a-a28c-65e1f5910b6b --l2p_dram_limit 10 -c nvc0n1p0 00:23:01.041 [2024-07-25 13:18:53.002078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.002181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:01.041 [2024-07-25 13:18:53.002216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:01.041 [2024-07-25 13:18:53.002241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.041 [2024-07-25 13:18:53.002369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.002407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:01.041 [2024-07-25 13:18:53.002444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:01.041 [2024-07-25 13:18:53.002470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.041 [2024-07-25 13:18:53.002525] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:01.041 [2024-07-25 13:18:53.003552] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:01.041 [2024-07-25 13:18:53.003605] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.003641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:01.041 [2024-07-25 13:18:53.003666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:23:01.041 [2024-07-25 13:18:53.003692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.041 [2024-07-25 13:18:53.003885] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 766e89cb-ea00-4368-85a7-fe9fb3737ad0 00:23:01.041 [2024-07-25 13:18:53.005082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.005151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:01.041 [2024-07-25 13:18:53.005191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:01.041 [2024-07-25 13:18:53.005216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.041 [2024-07-25 13:18:53.010224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.010307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:01.041 [2024-07-25 13:18:53.010346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.901 ms 00:23:01.041 [2024-07-25 13:18:53.010369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.041 [2024-07-25 13:18:53.010593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.010636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:01.041 [2024-07-25 13:18:53.010671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:23:01.041 [2024-07-25 13:18:53.010697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.041 [2024-07-25 13:18:53.010863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.010907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:01.041 [2024-07-25 13:18:53.010948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:01.041 [2024-07-25 13:18:53.010978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.041 [2024-07-25 13:18:53.011055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:01.041 [2024-07-25 13:18:53.015832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.041 [2024-07-25 13:18:53.015935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:01.041 [2024-07-25 13:18:53.015965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:23:01.042 [2024-07-25 13:18:53.015989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.042 [2024-07-25 13:18:53.016089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.042 [2024-07-25 13:18:53.016144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:01.042 [2024-07-25 13:18:53.016170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:01.042 [2024-07-25 13:18:53.016195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.042 [2024-07-25 13:18:53.016327] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:01.042 [2024-07-25 
13:18:53.016566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:01.042 [2024-07-25 13:18:53.016620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:01.042 [2024-07-25 13:18:53.016666] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:01.042 [2024-07-25 13:18:53.016697] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:01.042 [2024-07-25 13:18:53.016731] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:01.042 [2024-07-25 13:18:53.016755] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:01.042 [2024-07-25 13:18:53.016792] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:01.042 [2024-07-25 13:18:53.016816] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:01.042 [2024-07-25 13:18:53.016842] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:01.042 [2024-07-25 13:18:53.016866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.042 [2024-07-25 13:18:53.016893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:01.042 [2024-07-25 13:18:53.016919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:23:01.042 [2024-07-25 13:18:53.016945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.042 [2024-07-25 13:18:53.017094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.042 [2024-07-25 13:18:53.017188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:01.042 [2024-07-25 13:18:53.017217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:01.042 [2024-07-25 13:18:53.017250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.042 [2024-07-25 13:18:53.017412] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:01.042 [2024-07-25 13:18:53.017471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:01.042 [2024-07-25 13:18:53.017517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.042 [2024-07-25 13:18:53.017549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.042 [2024-07-25 13:18:53.017573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:01.042 [2024-07-25 13:18:53.017602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:01.042 [2024-07-25 13:18:53.017624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:01.042 [2024-07-25 13:18:53.017651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:01.042 [2024-07-25 13:18:53.017675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:01.042 [2024-07-25 13:18:53.017702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.042 [2024-07-25 13:18:53.017725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:01.042 [2024-07-25 13:18:53.017755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:01.042 [2024-07-25 13:18:53.017778] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
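Up to this point the trace is just a fixed sequence of RPC calls that builds the FTL bdev by hand. A condensed sketch of that sequence, reconstructed from the commands visible above, is shown here; the lvstore and lvol UUIDs are the ones from this particular run and would differ elsewhere, the $rpc variable is only shorthand for the full rpc.py path used in the trace, and spdk_tgt is assumed to already be running on the default RPC socket.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base (data) device and NV cache device picked by dirty_shutdown.sh for this run.
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

  # Wipe the leftover lvstore, then carve a thin-provisioned 103424 MiB volume
  # out of the base namespace for FTL to sit on.
  $rpc bdev_lvol_delete_lvstore -u c61b302e-3518-4b5c-876f-eda43431b89a
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u a91feac6-3a51-4201-b2e9-28572dc6d0ff

  # Split a 5171 MiB write-buffer cache (nvc0n1p0) off the cache namespace.
  $rpc bdev_split_create nvc0n1 -s 5171 1

  # Create the FTL bdev on top of both, capping the resident L2P table at 10 MiB of DRAM.
  $rpc -t 240 bdev_ftl_create -b ftl0 -d f6a96e36-d246-4e0a-a28c-65e1f5910b6b --l2p_dram_limit 10 -c nvc0n1p0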
00:23:01.042 [2024-07-25 13:18:53.017803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:01.042 [2024-07-25 13:18:53.017832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:01.042 [2024-07-25 13:18:53.017860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.042 [2024-07-25 13:18:53.017879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:01.042 [2024-07-25 13:18:53.017908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:01.042 [2024-07-25 13:18:53.017930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.042 [2024-07-25 13:18:53.017956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:01.042 [2024-07-25 13:18:53.017979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.042 [2024-07-25 13:18:53.018027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:01.042 [2024-07-25 13:18:53.018050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.042 [2024-07-25 13:18:53.018099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:01.042 [2024-07-25 13:18:53.018147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.042 [2024-07-25 13:18:53.018204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:01.042 [2024-07-25 13:18:53.018232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.042 [2024-07-25 13:18:53.018269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:01.042 [2024-07-25 13:18:53.018291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.042 [2024-07-25 13:18:53.018342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:01.042 [2024-07-25 13:18:53.018373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:01.042 [2024-07-25 13:18:53.018396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.042 [2024-07-25 13:18:53.018423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:01.042 [2024-07-25 13:18:53.018444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:01.042 [2024-07-25 13:18:53.018469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:01.042 [2024-07-25 13:18:53.018518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:01.042 [2024-07-25 13:18:53.018540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018565] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:01.042 [2024-07-25 13:18:53.018587] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:01.042 [2024-07-25 13:18:53.018615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.042 [2024-07-25 13:18:53.018640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.042 [2024-07-25 13:18:53.018668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:01.042 [2024-07-25 13:18:53.018697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:01.042 [2024-07-25 13:18:53.018729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:01.042 [2024-07-25 13:18:53.018751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:01.042 [2024-07-25 13:18:53.018777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:01.042 [2024-07-25 13:18:53.018800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:01.042 [2024-07-25 13:18:53.018834] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:01.042 [2024-07-25 13:18:53.018866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.042 [2024-07-25 13:18:53.018897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:01.042 [2024-07-25 13:18:53.018921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:01.042 [2024-07-25 13:18:53.018947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:01.042 [2024-07-25 13:18:53.018972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:01.042 [2024-07-25 13:18:53.019002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:01.042 [2024-07-25 13:18:53.019026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:01.042 [2024-07-25 13:18:53.019054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:01.042 [2024-07-25 13:18:53.019079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:01.042 [2024-07-25 13:18:53.019121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:01.042 [2024-07-25 13:18:53.019143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:01.042 [2024-07-25 13:18:53.019166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:01.042 [2024-07-25 13:18:53.019186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:01.042 [2024-07-25 13:18:53.019212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:01.042 [2024-07-25 
13:18:53.019237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:01.042 [2024-07-25 13:18:53.019264] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:01.042 [2024-07-25 13:18:53.019289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.042 [2024-07-25 13:18:53.019318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.042 [2024-07-25 13:18:53.019340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:01.042 [2024-07-25 13:18:53.019367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:01.043 [2024-07-25 13:18:53.019392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:01.043 [2024-07-25 13:18:53.019423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.043 [2024-07-25 13:18:53.019448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:01.043 [2024-07-25 13:18:53.019477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:23:01.043 [2024-07-25 13:18:53.019500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.043 [2024-07-25 13:18:53.019594] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
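Two of the sizes in the layout dump above can be cross-checked with plain shell arithmetic, using the 4096-byte block size reported by bdev_get_bdevs earlier; this is only a sanity check of the numbers printed in this run, not part of the test itself.

  # 26476544 blocks x 4096 B = 103424 MiB, the lvol size echoed by get_bdev_size above.
  echo $((26476544 * 4096 / 1024 / 1024))    # prints 103424

  # 20971520 L2P entries x 4 B per entry = 80 MiB, the size of the "l2p" region in the dump.
  echo $((20971520 * 4 / 1024 / 1024))       # prints 80

The --l2p_dram_limit 10 passed to bdev_ftl_create is also why the L2P cache reports a maximum resident size of 9 (of 10) MiB further down in the trace.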
00:23:01.043 [2024-07-25 13:18:53.019630] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:02.939 [2024-07-25 13:18:54.969630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:54.969707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:02.939 [2024-07-25 13:18:54.969737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1950.040 ms 00:23:02.939 [2024-07-25 13:18:54.969761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.002699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.002768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.939 [2024-07-25 13:18:55.002795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.566 ms 00:23:02.939 [2024-07-25 13:18:55.002809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.003009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.003031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.939 [2024-07-25 13:18:55.003053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:02.939 [2024-07-25 13:18:55.003065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.042376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.042441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.939 [2024-07-25 13:18:55.042466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.186 ms 00:23:02.939 [2024-07-25 13:18:55.042480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.042555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.042572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.939 [2024-07-25 13:18:55.042594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.939 [2024-07-25 13:18:55.042606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.043149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.043182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.939 [2024-07-25 13:18:55.043206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:23:02.939 [2024-07-25 13:18:55.043219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.043424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.043474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.939 [2024-07-25 13:18:55.043523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:23:02.939 [2024-07-25 13:18:55.043549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.061320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.061385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.939 [2024-07-25 
13:18:55.061409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.720 ms 00:23:02.939 [2024-07-25 13:18:55.061424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.939 [2024-07-25 13:18:55.075219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:02.939 [2024-07-25 13:18:55.078049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.939 [2024-07-25 13:18:55.078099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.939 [2024-07-25 13:18:55.078145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.476 ms 00:23:02.939 [2024-07-25 13:18:55.078162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 13:18:55.146160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.146259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:03.198 [2024-07-25 13:18:55.146283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.934 ms 00:23:03.198 [2024-07-25 13:18:55.146299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 13:18:55.146568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.146633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:03.198 [2024-07-25 13:18:55.146666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:23:03.198 [2024-07-25 13:18:55.146696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 13:18:55.179447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.179525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:03.198 [2024-07-25 13:18:55.179548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.610 ms 00:23:03.198 [2024-07-25 13:18:55.179570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 13:18:55.212532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.212649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:03.198 [2024-07-25 13:18:55.212672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.892 ms 00:23:03.198 [2024-07-25 13:18:55.212686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 13:18:55.213628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.213703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:03.198 [2024-07-25 13:18:55.213740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:23:03.198 [2024-07-25 13:18:55.213755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 13:18:55.304197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.304293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:03.198 [2024-07-25 13:18:55.304316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.331 ms 00:23:03.198 [2024-07-25 13:18:55.304336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 
13:18:55.338252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.338347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:03.198 [2024-07-25 13:18:55.338371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.834 ms 00:23:03.198 [2024-07-25 13:18:55.338402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.198 [2024-07-25 13:18:55.373201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.198 [2024-07-25 13:18:55.373292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:03.198 [2024-07-25 13:18:55.373315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.686 ms 00:23:03.198 [2024-07-25 13:18:55.373330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.457 [2024-07-25 13:18:55.407589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.457 [2024-07-25 13:18:55.407690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:03.457 [2024-07-25 13:18:55.407714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.161 ms 00:23:03.457 [2024-07-25 13:18:55.407730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.457 [2024-07-25 13:18:55.407845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.457 [2024-07-25 13:18:55.407871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:03.457 [2024-07-25 13:18:55.407886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:03.457 [2024-07-25 13:18:55.407912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.457 [2024-07-25 13:18:55.408267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.457 [2024-07-25 13:18:55.408340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:03.457 [2024-07-25 13:18:55.408376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:23:03.457 [2024-07-25 13:18:55.408406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.457 [2024-07-25 13:18:55.409766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2407.172 ms, result 0 00:23:03.457 { 00:23:03.457 "name": "ftl0", 00:23:03.457 "uuid": "766e89cb-ea00-4368-85a7-fe9fb3737ad0" 00:23:03.457 } 00:23:03.457 13:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:03.457 13:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:03.715 13:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:03.715 13:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:03.715 13:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:03.974 /dev/nbd0 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:03.974 1+0 records in 00:23:03.974 1+0 records out 00:23:03.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557036 s, 7.4 MB/s 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:03.974 13:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:03.974 [2024-07-25 13:18:56.079139] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:03.974 [2024-07-25 13:18:56.079283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82182 ] 00:23:04.232 [2024-07-25 13:18:56.245047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.490 [2024-07-25 13:18:56.464805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.227  Copying: 165/1024 [MB] (165 MBps) Copying: 337/1024 [MB] (172 MBps) Copying: 509/1024 [MB] (172 MBps) Copying: 677/1024 [MB] (167 MBps) Copying: 840/1024 [MB] (163 MBps) Copying: 991/1024 [MB] (150 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:23:12.227 00:23:12.227 13:19:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:14.765 13:19:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:14.765 [2024-07-25 13:19:06.447537] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
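The write phase that follows uses spdk_dd twice: once to generate 262144 blocks of 4096 bytes (1 GiB) of random data in a temporary file and record its md5 checksum, and once to stream that file onto the FTL bdev through the NBD export opened above. Restated compactly from the commands in the trace (paths as used in this run, with $dd_bin and $testfile being nothing more than shorthand):

  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

  # Produce 1 GiB of random data and remember its checksum.
  $dd_bin -m 0x2 --if=/dev/urandom --of=$testfile --bs=4096 --count=262144
  md5sum $testfile

  # Replay the same data onto the FTL device at /dev/nbd0, bypassing the page cache.
  $dd_bin -m 0x2 --if=$testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct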
00:23:14.765 [2024-07-25 13:19:06.447703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82287 ] 00:23:14.765 [2024-07-25 13:19:06.623670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.765 [2024-07-25 13:19:06.854435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.682  Copying: 16/1024 [MB] (16 MBps) Copying: 30/1024 [MB] (13 MBps) Copying: 47/1024 [MB] (16 MBps) Copying: 63/1024 [MB] (16 MBps) Copying: 81/1024 [MB] (17 MBps) Copying: 98/1024 [MB] (17 MBps) Copying: 115/1024 [MB] (17 MBps) Copying: 132/1024 [MB] (16 MBps) Copying: 149/1024 [MB] (17 MBps) Copying: 167/1024 [MB] (18 MBps) Copying: 184/1024 [MB] (16 MBps) Copying: 202/1024 [MB] (17 MBps) Copying: 220/1024 [MB] (18 MBps) Copying: 237/1024 [MB] (17 MBps) Copying: 254/1024 [MB] (17 MBps) Copying: 272/1024 [MB] (17 MBps) Copying: 288/1024 [MB] (16 MBps) Copying: 306/1024 [MB] (17 MBps) Copying: 324/1024 [MB] (17 MBps) Copying: 341/1024 [MB] (17 MBps) Copying: 358/1024 [MB] (16 MBps) Copying: 374/1024 [MB] (16 MBps) Copying: 391/1024 [MB] (17 MBps) Copying: 408/1024 [MB] (17 MBps) Copying: 424/1024 [MB] (15 MBps) Copying: 441/1024 [MB] (17 MBps) Copying: 459/1024 [MB] (17 MBps) Copying: 476/1024 [MB] (17 MBps) Copying: 494/1024 [MB] (18 MBps) Copying: 513/1024 [MB] (18 MBps) Copying: 530/1024 [MB] (16 MBps) Copying: 545/1024 [MB] (15 MBps) Copying: 562/1024 [MB] (16 MBps) Copying: 579/1024 [MB] (16 MBps) Copying: 596/1024 [MB] (17 MBps) Copying: 614/1024 [MB] (18 MBps) Copying: 632/1024 [MB] (17 MBps) Copying: 650/1024 [MB] (17 MBps) Copying: 666/1024 [MB] (16 MBps) Copying: 684/1024 [MB] (17 MBps) Copying: 702/1024 [MB] (17 MBps) Copying: 717/1024 [MB] (15 MBps) Copying: 734/1024 [MB] (16 MBps) Copying: 751/1024 [MB] (17 MBps) Copying: 768/1024 [MB] (16 MBps) Copying: 786/1024 [MB] (17 MBps) Copying: 803/1024 [MB] (17 MBps) Copying: 821/1024 [MB] (18 MBps) Copying: 839/1024 [MB] (17 MBps) Copying: 856/1024 [MB] (17 MBps) Copying: 874/1024 [MB] (17 MBps) Copying: 889/1024 [MB] (15 MBps) Copying: 906/1024 [MB] (16 MBps) Copying: 921/1024 [MB] (15 MBps) Copying: 937/1024 [MB] (16 MBps) Copying: 953/1024 [MB] (15 MBps) Copying: 969/1024 [MB] (15 MBps) Copying: 985/1024 [MB] (15 MBps) Copying: 1001/1024 [MB] (16 MBps) Copying: 1019/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:24:16.682 00:24:16.682 13:20:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:16.682 13:20:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:16.682 13:20:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:16.941 [2024-07-25 13:20:09.112259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.941 [2024-07-25 13:20:09.112323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:16.941 [2024-07-25 13:20:09.112365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:16.941 [2024-07-25 13:20:09.112379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.941 [2024-07-25 13:20:09.112427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
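Once the copy completes, teardown also goes through the RPC interface. Restated from the commands visible in the trace above (again with $rpc as shorthand for the full rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Flush outstanding writes on the NBD export, then detach it.
  sync /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd0

  # Unload the FTL bdev; the trace that follows shows the shutdown sequence this
  # triggers (persist L2P, NV cache metadata, valid map, P2L, band info, superblock).
  $rpc bdev_ftl_unload -b ftl0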
00:24:16.941 [2024-07-25 13:20:09.115814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.941 [2024-07-25 13:20:09.115859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:16.941 [2024-07-25 13:20:09.115884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.361 ms 00:24:16.941 [2024-07-25 13:20:09.115899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.941 [2024-07-25 13:20:09.117523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.941 [2024-07-25 13:20:09.117578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:16.941 [2024-07-25 13:20:09.117597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:24:16.941 [2024-07-25 13:20:09.117616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.200 [2024-07-25 13:20:09.133990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.200 [2024-07-25 13:20:09.134068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:17.200 [2024-07-25 13:20:09.134090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.342 ms 00:24:17.200 [2024-07-25 13:20:09.134118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.200 [2024-07-25 13:20:09.140911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.200 [2024-07-25 13:20:09.140979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:17.200 [2024-07-25 13:20:09.141000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.726 ms 00:24:17.201 [2024-07-25 13:20:09.141015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.172535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.201 [2024-07-25 13:20:09.172604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:17.201 [2024-07-25 13:20:09.172626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.368 ms 00:24:17.201 [2024-07-25 13:20:09.172641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.191925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.201 [2024-07-25 13:20:09.192036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:17.201 [2024-07-25 13:20:09.192059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.216 ms 00:24:17.201 [2024-07-25 13:20:09.192090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.192382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.201 [2024-07-25 13:20:09.192413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:17.201 [2024-07-25 13:20:09.192429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:24:17.201 [2024-07-25 13:20:09.192444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.225996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.201 [2024-07-25 13:20:09.226099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:17.201 [2024-07-25 13:20:09.226134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.517 ms 00:24:17.201 [2024-07-25 13:20:09.226150] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.258543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.201 [2024-07-25 13:20:09.258642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:17.201 [2024-07-25 13:20:09.258665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.293 ms 00:24:17.201 [2024-07-25 13:20:09.258680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.290521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.201 [2024-07-25 13:20:09.290628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:17.201 [2024-07-25 13:20:09.290650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.743 ms 00:24:17.201 [2024-07-25 13:20:09.290665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.322612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.201 [2024-07-25 13:20:09.322695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:17.201 [2024-07-25 13:20:09.322718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.739 ms 00:24:17.201 [2024-07-25 13:20:09.322733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.201 [2024-07-25 13:20:09.322809] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:17.201 [2024-07-25 13:20:09.322840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.322996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 
13:20:09.323037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:24:17.201 [2024-07-25 13:20:09.323407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:17.201 [2024-07-25 13:20:09.323720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.323996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:17.202 [2024-07-25 13:20:09.324385] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:17.202 [2024-07-25 13:20:09.324398] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 766e89cb-ea00-4368-85a7-fe9fb3737ad0 00:24:17.202 [2024-07-25 13:20:09.324429] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:17.202 [2024-07-25 13:20:09.324455] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:17.202 [2024-07-25 13:20:09.324475] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:17.202 [2024-07-25 13:20:09.324488] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:17.202 [2024-07-25 13:20:09.324502] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:17.202 [2024-07-25 13:20:09.324514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:17.202 [2024-07-25 13:20:09.324528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:17.202 [2024-07-25 13:20:09.324539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:17.202 [2024-07-25 13:20:09.324551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:17.202 [2024-07-25 13:20:09.324564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.202 [2024-07-25 13:20:09.324578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:17.202 [2024-07-25 13:20:09.324592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:24:17.202 [2024-07-25 13:20:09.324606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.202 [2024-07-25 13:20:09.341798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.202 [2024-07-25 13:20:09.341887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:24:17.202 [2024-07-25 13:20:09.341910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.076 ms 00:24:17.202 [2024-07-25 13:20:09.341926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.202 [2024-07-25 13:20:09.342408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.202 [2024-07-25 13:20:09.342448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:17.202 [2024-07-25 13:20:09.342465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:24:17.202 [2024-07-25 13:20:09.342480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.461 [2024-07-25 13:20:09.396181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.461 [2024-07-25 13:20:09.396276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:17.461 [2024-07-25 13:20:09.396299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.461 [2024-07-25 13:20:09.396315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.461 [2024-07-25 13:20:09.396419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.461 [2024-07-25 13:20:09.396440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:17.461 [2024-07-25 13:20:09.396454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.461 [2024-07-25 13:20:09.396481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.461 [2024-07-25 13:20:09.396635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.461 [2024-07-25 13:20:09.396662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:17.461 [2024-07-25 13:20:09.396677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.461 [2024-07-25 13:20:09.396691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.461 [2024-07-25 13:20:09.396718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.461 [2024-07-25 13:20:09.396739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:17.461 [2024-07-25 13:20:09.396755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.461 [2024-07-25 13:20:09.396769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.461 [2024-07-25 13:20:09.496705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.461 [2024-07-25 13:20:09.496784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:17.461 [2024-07-25 13:20:09.496805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.461 [2024-07-25 13:20:09.496820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.461 [2024-07-25 13:20:09.582417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.461 [2024-07-25 13:20:09.582503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:17.461 [2024-07-25 13:20:09.582525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.461 [2024-07-25 13:20:09.582541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.461 [2024-07-25 13:20:09.582718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.461 [2024-07-25 13:20:09.582747] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:17.462 [2024-07-25 13:20:09.582761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.462 [2024-07-25 13:20:09.582776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.462 [2024-07-25 13:20:09.582847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.462 [2024-07-25 13:20:09.582882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:17.462 [2024-07-25 13:20:09.582895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.462 [2024-07-25 13:20:09.582909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.462 [2024-07-25 13:20:09.583034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.462 [2024-07-25 13:20:09.583071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:17.462 [2024-07-25 13:20:09.583089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.462 [2024-07-25 13:20:09.583121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.462 [2024-07-25 13:20:09.583182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.462 [2024-07-25 13:20:09.583217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:17.462 [2024-07-25 13:20:09.583233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.462 [2024-07-25 13:20:09.583247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.462 [2024-07-25 13:20:09.583297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.462 [2024-07-25 13:20:09.583317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:17.462 [2024-07-25 13:20:09.583332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.462 [2024-07-25 13:20:09.583346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.462 [2024-07-25 13:20:09.583403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:17.462 [2024-07-25 13:20:09.583427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:17.462 [2024-07-25 13:20:09.583440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:17.462 [2024-07-25 13:20:09.583453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.462 [2024-07-25 13:20:09.583638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 471.344 ms, result 0 00:24:17.462 true 00:24:17.462 13:20:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82045 00:24:17.462 13:20:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82045 00:24:17.462 13:20:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:17.720 [2024-07-25 13:20:09.712653] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
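The spdk_dd call at dirty_shutdown.sh line 87 above regenerates the write payload for the next phase: 262144 blocks of 4096 bytes each, i.e. 1 GiB of /dev/urandom data written to test/ftl/testfile2, which is then replayed into ftl0 in the step below. A rough host-side equivalent for producing the same file without the SPDK build (plain dd as a stand-in; the script itself uses spdk_dd):

  # 262144 x 4096 B = 1073741824 B = 1 GiB of random test data
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 bs=4096 count=262144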
00:24:17.720 [2024-07-25 13:20:09.712837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82908 ] 00:24:17.720 [2024-07-25 13:20:09.885669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.978 [2024-07-25 13:20:10.128728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.840  Copying: 166/1024 [MB] (166 MBps) Copying: 338/1024 [MB] (171 MBps) Copying: 508/1024 [MB] (169 MBps) Copying: 677/1024 [MB] (169 MBps) Copying: 846/1024 [MB] (169 MBps) Copying: 1014/1024 [MB] (167 MBps) Copying: 1024/1024 [MB] (average 169 MBps) 00:24:25.840 00:24:25.840 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82045 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:25.840 13:20:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:25.840 [2024-07-25 13:20:17.749193] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:25.840 [2024-07-25 13:20:17.749360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82989 ] 00:24:25.840 [2024-07-25 13:20:17.920970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.098 [2024-07-25 13:20:18.109735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.356 [2024-07-25 13:20:18.422097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.356 [2024-07-25 13:20:18.422196] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.356 [2024-07-25 13:20:18.489276] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:26.356 [2024-07-25 13:20:18.489727] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:26.356 [2024-07-25 13:20:18.489962] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:26.615 [2024-07-25 13:20:18.715544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.615 [2024-07-25 13:20:18.715610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:26.615 [2024-07-25 13:20:18.715631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:26.615 [2024-07-25 13:20:18.715644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.615 [2024-07-25 13:20:18.715718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.615 [2024-07-25 13:20:18.715740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.615 [2024-07-25 13:20:18.715754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:26.616 [2024-07-25 13:20:18.715766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.715798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:26.616 [2024-07-25 13:20:18.716745] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:26.616 [2024-07-25 13:20:18.716779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.716793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.616 [2024-07-25 13:20:18.716807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:24:26.616 [2024-07-25 13:20:18.716818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.718021] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:26.616 [2024-07-25 13:20:18.734274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.734326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:26.616 [2024-07-25 13:20:18.734354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.253 ms 00:24:26.616 [2024-07-25 13:20:18.734367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.734451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.734472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:26.616 [2024-07-25 13:20:18.734485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:26.616 [2024-07-25 13:20:18.734498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.739180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.739230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.616 [2024-07-25 13:20:18.739249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:24:26.616 [2024-07-25 13:20:18.739260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.739384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.739407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.616 [2024-07-25 13:20:18.739421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:26.616 [2024-07-25 13:20:18.739433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.739507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.739526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:26.616 [2024-07-25 13:20:18.739544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:26.616 [2024-07-25 13:20:18.739556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.739591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.616 [2024-07-25 13:20:18.743910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.743948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.616 [2024-07-25 13:20:18.743964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.328 ms 00:24:26.616 [2024-07-25 13:20:18.743976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 
13:20:18.744022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.744039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:26.616 [2024-07-25 13:20:18.744052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:26.616 [2024-07-25 13:20:18.744064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.744132] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:26.616 [2024-07-25 13:20:18.744167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:26.616 [2024-07-25 13:20:18.744215] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:26.616 [2024-07-25 13:20:18.744235] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:26.616 [2024-07-25 13:20:18.744343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:26.616 [2024-07-25 13:20:18.744359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:26.616 [2024-07-25 13:20:18.744374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:26.616 [2024-07-25 13:20:18.744389] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:26.616 [2024-07-25 13:20:18.744403] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:26.616 [2024-07-25 13:20:18.744421] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:26.616 [2024-07-25 13:20:18.744433] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:26.616 [2024-07-25 13:20:18.744444] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:26.616 [2024-07-25 13:20:18.744455] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:26.616 [2024-07-25 13:20:18.744468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.744479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:26.616 [2024-07-25 13:20:18.744492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:24:26.616 [2024-07-25 13:20:18.744504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.744600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.616 [2024-07-25 13:20:18.744616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:26.616 [2024-07-25 13:20:18.744633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:26.616 [2024-07-25 13:20:18.744645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.616 [2024-07-25 13:20:18.744781] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:26.616 [2024-07-25 13:20:18.744809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:26.616 [2024-07-25 13:20:18.744823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.616 [2024-07-25 13:20:18.744835] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:24:26.616 [2024-07-25 13:20:18.744848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:26.616 [2024-07-25 13:20:18.744860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:26.616 [2024-07-25 13:20:18.744872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:26.616 [2024-07-25 13:20:18.744883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:26.616 [2024-07-25 13:20:18.744894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:26.616 [2024-07-25 13:20:18.744905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.616 [2024-07-25 13:20:18.744916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:26.616 [2024-07-25 13:20:18.744927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:26.616 [2024-07-25 13:20:18.744939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.616 [2024-07-25 13:20:18.744950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:26.616 [2024-07-25 13:20:18.744978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:26.616 [2024-07-25 13:20:18.744990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:26.616 [2024-07-25 13:20:18.745029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:26.616 [2024-07-25 13:20:18.745041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:26.616 [2024-07-25 13:20:18.745063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.616 [2024-07-25 13:20:18.745086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:26.616 [2024-07-25 13:20:18.745097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.616 [2024-07-25 13:20:18.745136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:26.616 [2024-07-25 13:20:18.745148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.616 [2024-07-25 13:20:18.745170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:26.616 [2024-07-25 13:20:18.745181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.616 [2024-07-25 13:20:18.745204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:26.616 [2024-07-25 13:20:18.745215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.616 [2024-07-25 13:20:18.745238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:26.616 [2024-07-25 13:20:18.745249] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:26.616 [2024-07-25 13:20:18.745260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.616 [2024-07-25 13:20:18.745273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:26.616 [2024-07-25 13:20:18.745285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:26.616 [2024-07-25 13:20:18.745296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:26.616 [2024-07-25 13:20:18.745318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:26.616 [2024-07-25 13:20:18.745329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.616 [2024-07-25 13:20:18.745340] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:26.616 [2024-07-25 13:20:18.745352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:26.616 [2024-07-25 13:20:18.745363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.617 [2024-07-25 13:20:18.745375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.617 [2024-07-25 13:20:18.745392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:26.617 [2024-07-25 13:20:18.745404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:26.617 [2024-07-25 13:20:18.745416] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:26.617 [2024-07-25 13:20:18.745427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:26.617 [2024-07-25 13:20:18.745437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:26.617 [2024-07-25 13:20:18.745449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:26.617 [2024-07-25 13:20:18.745462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:26.617 [2024-07-25 13:20:18.745477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.617 [2024-07-25 13:20:18.745490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:26.617 [2024-07-25 13:20:18.745503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:26.617 [2024-07-25 13:20:18.745515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:26.617 [2024-07-25 13:20:18.745527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:26.617 [2024-07-25 13:20:18.745539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:26.617 [2024-07-25 13:20:18.745552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:26.617 [2024-07-25 13:20:18.745564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:26.617 [2024-07-25 
13:20:18.745576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:26.617 [2024-07-25 13:20:18.745592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:26.617 [2024-07-25 13:20:18.745605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:26.617 [2024-07-25 13:20:18.745617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:26.617 [2024-07-25 13:20:18.745629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:26.617 [2024-07-25 13:20:18.745642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:26.617 [2024-07-25 13:20:18.745655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:26.617 [2024-07-25 13:20:18.745669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:26.617 [2024-07-25 13:20:18.745682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.617 [2024-07-25 13:20:18.745695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:26.617 [2024-07-25 13:20:18.745707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:26.617 [2024-07-25 13:20:18.745720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:26.617 [2024-07-25 13:20:18.745732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:26.617 [2024-07-25 13:20:18.745746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.617 [2024-07-25 13:20:18.745759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:26.617 [2024-07-25 13:20:18.745771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:24:26.617 [2024-07-25 13:20:18.745782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.617 [2024-07-25 13:20:18.802210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.617 [2024-07-25 13:20:18.802286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.617 [2024-07-25 13:20:18.802311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.359 ms 00:24:26.617 [2024-07-25 13:20:18.802326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.617 [2024-07-25 13:20:18.802472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.617 [2024-07-25 13:20:18.802492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.617 [2024-07-25 13:20:18.802516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:26.617 [2024-07-25 13:20:18.802530] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.849963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.850031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.876 [2024-07-25 13:20:18.850054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.315 ms 00:24:26.876 [2024-07-25 13:20:18.850070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.850183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.850208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.876 [2024-07-25 13:20:18.850226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:26.876 [2024-07-25 13:20:18.850240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.850723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.850756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.876 [2024-07-25 13:20:18.850773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:24:26.876 [2024-07-25 13:20:18.850787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.850978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.851009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.876 [2024-07-25 13:20:18.851024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:24:26.876 [2024-07-25 13:20:18.851038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.870931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.870991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.876 [2024-07-25 13:20:18.871014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.859 ms 00:24:26.876 [2024-07-25 13:20:18.871029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.891216] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:26.876 [2024-07-25 13:20:18.891265] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.876 [2024-07-25 13:20:18.891287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.891302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.876 [2024-07-25 13:20:18.891318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.063 ms 00:24:26.876 [2024-07-25 13:20:18.891335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.928327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.928407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.876 [2024-07-25 13:20:18.928431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.921 ms 00:24:26.876 [2024-07-25 13:20:18.928446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 
13:20:18.949067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.949137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.876 [2024-07-25 13:20:18.949159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.536 ms 00:24:26.876 [2024-07-25 13:20:18.949174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.968092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.968149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.876 [2024-07-25 13:20:18.968170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.845 ms 00:24:26.876 [2024-07-25 13:20:18.968184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:18.969186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:18.969225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.876 [2024-07-25 13:20:18.969246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:24:26.876 [2024-07-25 13:20:18.969260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.876 [2024-07-25 13:20:19.056470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.876 [2024-07-25 13:20:19.056546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.876 [2024-07-25 13:20:19.056572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.177 ms 00:24:26.876 [2024-07-25 13:20:19.056587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.072001] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:27.134 [2024-07-25 13:20:19.075034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.134 [2024-07-25 13:20:19.075075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:27.134 [2024-07-25 13:20:19.075096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.365 ms 00:24:27.134 [2024-07-25 13:20:19.075125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.075277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.134 [2024-07-25 13:20:19.075306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:27.134 [2024-07-25 13:20:19.075334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:27.134 [2024-07-25 13:20:19.075353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.075469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.134 [2024-07-25 13:20:19.075492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:27.134 [2024-07-25 13:20:19.075508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:27.134 [2024-07-25 13:20:19.075523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.075562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.134 [2024-07-25 13:20:19.075581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:27.134 [2024-07-25 13:20:19.075604] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:27.134 [2024-07-25 13:20:19.075619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.075665] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:27.134 [2024-07-25 13:20:19.075699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.134 [2024-07-25 13:20:19.075713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:27.134 [2024-07-25 13:20:19.075728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:27.134 [2024-07-25 13:20:19.075743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.113641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.134 [2024-07-25 13:20:19.113699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:27.134 [2024-07-25 13:20:19.113722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.862 ms 00:24:27.134 [2024-07-25 13:20:19.113737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.113840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.134 [2024-07-25 13:20:19.113863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:27.134 [2024-07-25 13:20:19.113879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:27.134 [2024-07-25 13:20:19.113893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.134 [2024-07-25 13:20:19.115233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.059 ms, result 0 00:25:04.281  Copying: 29/1024 [MB] (29 MBps) Copying: 59/1024 [MB] (30 MBps) Copying: 88/1024 [MB] (28 MBps) Copying: 114/1024 [MB] (26 MBps) Copying: 142/1024 [MB] (28 MBps) Copying: 172/1024 [MB] (29 MBps) Copying: 201/1024 [MB] (29 MBps) Copying: 231/1024 [MB] (29 MBps) Copying: 260/1024 [MB] (29 MBps) Copying: 289/1024 [MB] (28 MBps) Copying: 318/1024 [MB] (29 MBps) Copying: 344/1024 [MB] (25 MBps) Copying: 372/1024 [MB] (28 MBps) Copying: 400/1024 [MB] (28 MBps) Copying: 426/1024 [MB] (26 MBps) Copying: 453/1024 [MB] (27 MBps) Copying: 480/1024 [MB] (26 MBps) Copying: 507/1024 [MB] (27 MBps) Copying: 534/1024 [MB] (27 MBps) Copying: 561/1024 [MB] (26 MBps) Copying: 590/1024 [MB] (29 MBps) Copying: 620/1024 [MB] (29 MBps) Copying: 650/1024 [MB] (29 MBps) Copying: 679/1024 [MB] (29 MBps) Copying: 709/1024 [MB] (29 MBps) Copying: 738/1024 [MB] (29 MBps) Copying: 768/1024 [MB] (29 MBps) Copying: 797/1024 [MB] (29 MBps) Copying: 827/1024 [MB] (29 MBps) Copying: 856/1024 [MB] (29 MBps) Copying: 884/1024 [MB] (27 MBps) Copying: 911/1024 [MB] (27 MBps) Copying: 940/1024 [MB] (28 MBps) Copying: 967/1024 [MB] (27 MBps) Copying: 995/1024 [MB] (28 MBps) Copying: 1023/1024 [MB] (27 MBps) Copying: 1048428/1048576 [kB] (808 kBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-25 13:20:56.337428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.281 [2024-07-25 13:20:56.337506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:04.281 [2024-07-25 13:20:56.337529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:04.281 [2024-07-25 13:20:56.337541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:04.281 [2024-07-25 13:20:56.340797] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:04.281 [2024-07-25 13:20:56.345299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.281 [2024-07-25 13:20:56.345375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:04.281 [2024-07-25 13:20:56.345399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.441 ms 00:25:04.281 [2024-07-25 13:20:56.345412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.281 [2024-07-25 13:20:56.366932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.281 [2024-07-25 13:20:56.367073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:04.281 [2024-07-25 13:20:56.367119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.252 ms 00:25:04.281 [2024-07-25 13:20:56.367139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.281 [2024-07-25 13:20:56.392286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.281 [2024-07-25 13:20:56.392381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:04.281 [2024-07-25 13:20:56.392408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.112 ms 00:25:04.281 [2024-07-25 13:20:56.392424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.281 [2024-07-25 13:20:56.401548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.281 [2024-07-25 13:20:56.401653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:04.281 [2024-07-25 13:20:56.401708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.054 ms 00:25:04.281 [2024-07-25 13:20:56.401735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.281 [2024-07-25 13:20:56.444118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.282 [2024-07-25 13:20:56.444203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:04.282 [2024-07-25 13:20:56.444228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.197 ms 00:25:04.282 [2024-07-25 13:20:56.444243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.282 [2024-07-25 13:20:56.465649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.282 [2024-07-25 13:20:56.465744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:04.282 [2024-07-25 13:20:56.465771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.316 ms 00:25:04.282 [2024-07-25 13:20:56.465787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.541 [2024-07-25 13:20:56.558358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.541 [2024-07-25 13:20:56.558466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:04.541 [2024-07-25 13:20:56.558492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.484 ms 00:25:04.541 [2024-07-25 13:20:56.558536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.541 [2024-07-25 13:20:56.597207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.541 [2024-07-25 13:20:56.597266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info 
metadata 00:25:04.541 [2024-07-25 13:20:56.597290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.637 ms 00:25:04.541 [2024-07-25 13:20:56.597305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.541 [2024-07-25 13:20:56.635884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.541 [2024-07-25 13:20:56.635969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:04.541 [2024-07-25 13:20:56.635994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.491 ms 00:25:04.541 [2024-07-25 13:20:56.636010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.541 [2024-07-25 13:20:56.670367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.541 [2024-07-25 13:20:56.670429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:04.541 [2024-07-25 13:20:56.670460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.267 ms 00:25:04.541 [2024-07-25 13:20:56.670472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.541 [2024-07-25 13:20:56.701658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.541 [2024-07-25 13:20:56.701723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:04.541 [2024-07-25 13:20:56.701744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.072 ms 00:25:04.541 [2024-07-25 13:20:56.701756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.541 [2024-07-25 13:20:56.701820] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:04.541 [2024-07-25 13:20:56.701846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130816 / 261120 wr_cnt: 1 state: open 00:25:04.541 [2024-07-25 13:20:56.701861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.701996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: 
free 00:25:04.541 [2024-07-25 13:20:56.702009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 
261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:04.541 [2024-07-25 13:20:56.702706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702951] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.702988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:04.542 [2024-07-25 13:20:56.703130] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:04.542 [2024-07-25 13:20:56.703145] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 766e89cb-ea00-4368-85a7-fe9fb3737ad0 00:25:04.542 [2024-07-25 13:20:56.703166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130816 00:25:04.542 [2024-07-25 13:20:56.703177] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131776 00:25:04.542 [2024-07-25 13:20:56.703192] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130816 00:25:04.542 [2024-07-25 13:20:56.703205] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:25:04.542 [2024-07-25 13:20:56.703216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:04.542 [2024-07-25 13:20:56.703228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:04.542 [2024-07-25 13:20:56.703239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:04.542 [2024-07-25 13:20:56.703250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:04.542 [2024-07-25 13:20:56.703260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:04.542 [2024-07-25 13:20:56.703272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.542 [2024-07-25 13:20:56.703284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:04.542 [2024-07-25 13:20:56.703321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:25:04.542 [2024-07-25 13:20:56.703333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.542 [2024-07-25 
13:20:56.719858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.542 [2024-07-25 13:20:56.719909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:04.542 [2024-07-25 13:20:56.719927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.467 ms 00:25:04.542 [2024-07-25 13:20:56.719939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.542 [2024-07-25 13:20:56.720421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.542 [2024-07-25 13:20:56.720452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:04.542 [2024-07-25 13:20:56.720469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:25:04.542 [2024-07-25 13:20:56.720481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.801 [2024-07-25 13:20:56.758250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.758311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:04.802 [2024-07-25 13:20:56.758331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.758343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.758435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.758451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:04.802 [2024-07-25 13:20:56.758463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.758475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.758574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.758593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:04.802 [2024-07-25 13:20:56.758607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.758619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.758642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.758655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:04.802 [2024-07-25 13:20:56.758667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.758679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.857945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.858016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:04.802 [2024-07-25 13:20:56.858036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.858049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.943501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.943591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:04.802 [2024-07-25 13:20:56.943614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.943629] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.943747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.943777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.802 [2024-07-25 13:20:56.943790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.943802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.943852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.943869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:04.802 [2024-07-25 13:20:56.943882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.943893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.944017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.944041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:04.802 [2024-07-25 13:20:56.944055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.944066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.944150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.944169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:04.802 [2024-07-25 13:20:56.944182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.944194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.944241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.944257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:04.802 [2024-07-25 13:20:56.944276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.944287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.944340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.802 [2024-07-25 13:20:56.944357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:04.802 [2024-07-25 13:20:56.944369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.802 [2024-07-25 13:20:56.944381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.802 [2024-07-25 13:20:56.944531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 610.549 ms, result 0 00:25:06.704 00:25:06.704 00:25:06.704 13:20:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:08.605 13:21:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:08.605 [2024-07-25 13:21:00.751938] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:08.605 [2024-07-25 13:21:00.752581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83409 ] 00:25:08.865 [2024-07-25 13:21:00.916333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.123 [2024-07-25 13:21:01.148932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.381 [2024-07-25 13:21:01.474401] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.381 [2024-07-25 13:21:01.474500] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.640 [2024-07-25 13:21:01.636392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.640 [2024-07-25 13:21:01.636457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.640 [2024-07-25 13:21:01.636478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:09.640 [2024-07-25 13:21:01.636491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.640 [2024-07-25 13:21:01.636565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.640 [2024-07-25 13:21:01.636585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.640 [2024-07-25 13:21:01.636598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:09.640 [2024-07-25 13:21:01.636613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.640 [2024-07-25 13:21:01.636649] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.640 [2024-07-25 13:21:01.637630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.641 [2024-07-25 13:21:01.637667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.637690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.641 [2024-07-25 13:21:01.637709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:25:09.641 [2024-07-25 13:21:01.637721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.638985] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:09.641 [2024-07-25 13:21:01.655283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.655326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:09.641 [2024-07-25 13:21:01.655345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.310 ms 00:25:09.641 [2024-07-25 13:21:01.655356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.655433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.655457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:09.641 [2024-07-25 13:21:01.655470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:09.641 [2024-07-25 13:21:01.655481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.660031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:09.641 [2024-07-25 13:21:01.660087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.641 [2024-07-25 13:21:01.660118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.451 ms 00:25:09.641 [2024-07-25 13:21:01.660133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.660267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.660294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.641 [2024-07-25 13:21:01.660308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:25:09.641 [2024-07-25 13:21:01.660319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.660396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.660415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.641 [2024-07-25 13:21:01.660428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:09.641 [2024-07-25 13:21:01.660438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.660487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.641 [2024-07-25 13:21:01.664834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.664890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.641 [2024-07-25 13:21:01.664906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:25:09.641 [2024-07-25 13:21:01.664918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.664982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.665001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.641 [2024-07-25 13:21:01.665014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:09.641 [2024-07-25 13:21:01.665025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.665089] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:09.641 [2024-07-25 13:21:01.665162] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:09.641 [2024-07-25 13:21:01.665224] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:09.641 [2024-07-25 13:21:01.665256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:09.641 [2024-07-25 13:21:01.665363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.641 [2024-07-25 13:21:01.665379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.641 [2024-07-25 13:21:01.665394] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:09.641 [2024-07-25 13:21:01.665409] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.641 [2024-07-25 13:21:01.665422] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.641 [2024-07-25 13:21:01.665434] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.641 [2024-07-25 13:21:01.665446] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.641 [2024-07-25 13:21:01.665457] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.641 [2024-07-25 13:21:01.665467] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.641 [2024-07-25 13:21:01.665480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.665495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.641 [2024-07-25 13:21:01.665508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:25:09.641 [2024-07-25 13:21:01.665519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.665619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.641 [2024-07-25 13:21:01.665634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.641 [2024-07-25 13:21:01.665646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:09.641 [2024-07-25 13:21:01.665657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.641 [2024-07-25 13:21:01.665792] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.641 [2024-07-25 13:21:01.665812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.641 [2024-07-25 13:21:01.665830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.641 [2024-07-25 13:21:01.665841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.641 [2024-07-25 13:21:01.665853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.641 [2024-07-25 13:21:01.665864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.641 [2024-07-25 13:21:01.665874] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.641 [2024-07-25 13:21:01.665886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:09.641 [2024-07-25 13:21:01.665896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.641 [2024-07-25 13:21:01.665906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.641 [2024-07-25 13:21:01.665917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.641 [2024-07-25 13:21:01.665927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.641 [2024-07-25 13:21:01.665937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.641 [2024-07-25 13:21:01.665948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.641 [2024-07-25 13:21:01.665958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.641 [2024-07-25 13:21:01.665968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.641 [2024-07-25 13:21:01.665979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.641 [2024-07-25 13:21:01.665989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.641 [2024-07-25 13:21:01.665999] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.641 [2024-07-25 13:21:01.666009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.641 [2024-07-25 13:21:01.666034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.641 [2024-07-25 13:21:01.666046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.641 [2024-07-25 13:21:01.666057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.641 [2024-07-25 13:21:01.666067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.641 [2024-07-25 13:21:01.666077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.641 [2024-07-25 13:21:01.666087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.641 [2024-07-25 13:21:01.666098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.641 [2024-07-25 13:21:01.666125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.641 [2024-07-25 13:21:01.666137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.641 [2024-07-25 13:21:01.666147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.641 [2024-07-25 13:21:01.666157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.641 [2024-07-25 13:21:01.666168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.641 [2024-07-25 13:21:01.666178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.641 [2024-07-25 13:21:01.666188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.641 [2024-07-25 13:21:01.666198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.641 [2024-07-25 13:21:01.666209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.641 [2024-07-25 13:21:01.666220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.641 [2024-07-25 13:21:01.666230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.641 [2024-07-25 13:21:01.666241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.641 [2024-07-25 13:21:01.666251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.641 [2024-07-25 13:21:01.666261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.642 [2024-07-25 13:21:01.666272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.642 [2024-07-25 13:21:01.666283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.642 [2024-07-25 13:21:01.666293] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.642 [2024-07-25 13:21:01.666305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.642 [2024-07-25 13:21:01.666316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.642 [2024-07-25 13:21:01.666335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.642 [2024-07-25 13:21:01.666357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.642 [2024-07-25 13:21:01.666368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.642 [2024-07-25 13:21:01.666378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.642 
[2024-07-25 13:21:01.666389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.642 [2024-07-25 13:21:01.666399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.642 [2024-07-25 13:21:01.666409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.642 [2024-07-25 13:21:01.666423] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.642 [2024-07-25 13:21:01.666438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.642 [2024-07-25 13:21:01.666451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.642 [2024-07-25 13:21:01.666463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.642 [2024-07-25 13:21:01.666475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.642 [2024-07-25 13:21:01.666486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.642 [2024-07-25 13:21:01.666498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.642 [2024-07-25 13:21:01.666509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.642 [2024-07-25 13:21:01.666521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.642 [2024-07-25 13:21:01.666532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.642 [2024-07-25 13:21:01.666544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.642 [2024-07-25 13:21:01.666555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.642 [2024-07-25 13:21:01.666566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.642 [2024-07-25 13:21:01.666578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.642 [2024-07-25 13:21:01.666589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.642 [2024-07-25 13:21:01.666601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.642 [2024-07-25 13:21:01.666612] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:09.642 [2024-07-25 13:21:01.666625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.642 [2024-07-25 13:21:01.666643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.642 [2024-07-25 13:21:01.666655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.642 [2024-07-25 13:21:01.666667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.642 [2024-07-25 13:21:01.666678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.642 [2024-07-25 13:21:01.666691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.666703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.642 [2024-07-25 13:21:01.666714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:25:09.642 [2024-07-25 13:21:01.666726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.721166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.721227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.642 [2024-07-25 13:21:01.721252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.375 ms 00:25:09.642 [2024-07-25 13:21:01.721267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.721420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.721440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:09.642 [2024-07-25 13:21:01.721456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:09.642 [2024-07-25 13:21:01.721470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.768851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.768934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.642 [2024-07-25 13:21:01.768972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.269 ms 00:25:09.642 [2024-07-25 13:21:01.768991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.769073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.769094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.642 [2024-07-25 13:21:01.769128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.642 [2024-07-25 13:21:01.769153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.769618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.769644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.642 [2024-07-25 13:21:01.769659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:25:09.642 [2024-07-25 13:21:01.769673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.769889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.769915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.642 [2024-07-25 13:21:01.769930] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:25:09.642 [2024-07-25 13:21:01.769944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.789480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.789537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.642 [2024-07-25 13:21:01.789560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.492 ms 00:25:09.642 [2024-07-25 13:21:01.789580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.642 [2024-07-25 13:21:01.809550] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:09.642 [2024-07-25 13:21:01.809605] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:09.642 [2024-07-25 13:21:01.809628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.642 [2024-07-25 13:21:01.809644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:09.642 [2024-07-25 13:21:01.809660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:25:09.642 [2024-07-25 13:21:01.809674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.846443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.846520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:09.901 [2024-07-25 13:21:01.846544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.701 ms 00:25:09.901 [2024-07-25 13:21:01.846559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.865832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.865888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:09.901 [2024-07-25 13:21:01.865909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.195 ms 00:25:09.901 [2024-07-25 13:21:01.865924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.884756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.884805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:09.901 [2024-07-25 13:21:01.884826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.774 ms 00:25:09.901 [2024-07-25 13:21:01.884839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.885895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.885935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.901 [2024-07-25 13:21:01.885954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:25:09.901 [2024-07-25 13:21:01.885967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.973946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.974018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:09.901 [2024-07-25 13:21:01.974043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.930 ms 00:25:09.901 [2024-07-25 13:21:01.974069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.989812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.901 [2024-07-25 13:21:01.993289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.993359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.901 [2024-07-25 13:21:01.993396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.107 ms 00:25:09.901 [2024-07-25 13:21:01.993419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.993597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.993623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.901 [2024-07-25 13:21:01.993640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:09.901 [2024-07-25 13:21:01.993667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.995602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.995650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.901 [2024-07-25 13:21:01.995669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:25:09.901 [2024-07-25 13:21:01.995683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.995733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.995752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.901 [2024-07-25 13:21:01.995767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:09.901 [2024-07-25 13:21:01.995780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:01.995858] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.901 [2024-07-25 13:21:01.995884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:01.995904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.901 [2024-07-25 13:21:01.995919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:09.901 [2024-07-25 13:21:01.995942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:02.034281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:02.034353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:09.901 [2024-07-25 13:21:02.034378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.298 ms 00:25:09.901 [2024-07-25 13:21:02.034404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-07-25 13:21:02.034525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-07-25 13:21:02.034549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.901 [2024-07-25 13:21:02.034580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:09.901 [2024-07-25 13:21:02.034594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:09.901 [2024-07-25 13:21:02.043511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.220 ms, result 0 00:25:46.026  Copying: 828/1048576 [kB] (828 kBps) Copying: 4196/1048576 [kB] (3368 kBps) Copying: 22/1024 [MB] (18 MBps) Copying: 53/1024 [MB] (30 MBps) Copying: 83/1024 [MB] (29 MBps) Copying: 114/1024 [MB] (31 MBps) Copying: 143/1024 [MB] (28 MBps) Copying: 174/1024 [MB] (31 MBps) Copying: 206/1024 [MB] (31 MBps) Copying: 237/1024 [MB] (31 MBps) Copying: 268/1024 [MB] (30 MBps) Copying: 299/1024 [MB] (30 MBps) Copying: 328/1024 [MB] (29 MBps) Copying: 358/1024 [MB] (30 MBps) Copying: 389/1024 [MB] (30 MBps) Copying: 420/1024 [MB] (30 MBps) Copying: 449/1024 [MB] (28 MBps) Copying: 480/1024 [MB] (31 MBps) Copying: 511/1024 [MB] (31 MBps) Copying: 543/1024 [MB] (31 MBps) Copying: 575/1024 [MB] (31 MBps) Copying: 606/1024 [MB] (31 MBps) Copying: 635/1024 [MB] (28 MBps) Copying: 667/1024 [MB] (31 MBps) Copying: 698/1024 [MB] (31 MBps) Copying: 728/1024 [MB] (29 MBps) Copying: 758/1024 [MB] (30 MBps) Copying: 789/1024 [MB] (30 MBps) Copying: 820/1024 [MB] (31 MBps) Copying: 850/1024 [MB] (30 MBps) Copying: 880/1024 [MB] (29 MBps) Copying: 911/1024 [MB] (30 MBps) Copying: 941/1024 [MB] (30 MBps) Copying: 971/1024 [MB] (29 MBps) Copying: 1000/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-25 13:21:38.195015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.026 [2024-07-25 13:21:38.195100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:46.026 [2024-07-25 13:21:38.195154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:46.026 [2024-07-25 13:21:38.195181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.026 [2024-07-25 13:21:38.195217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:46.026 [2024-07-25 13:21:38.199172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.026 [2024-07-25 13:21:38.199215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:46.026 [2024-07-25 13:21:38.199234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.928 ms 00:25:46.026 [2024-07-25 13:21:38.199248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.026 [2024-07-25 13:21:38.199492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.026 [2024-07-25 13:21:38.199521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:46.026 [2024-07-25 13:21:38.199546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:25:46.026 [2024-07-25 13:21:38.199557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.026 [2024-07-25 13:21:38.210180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.026 [2024-07-25 13:21:38.210232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:46.026 [2024-07-25 13:21:38.210252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.598 ms 00:25:46.026 [2024-07-25 13:21:38.210264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.216910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.216946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:25:46.286 [2024-07-25 13:21:38.216976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.620 ms 00:25:46.286 [2024-07-25 13:21:38.217010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.248654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.248724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:46.286 [2024-07-25 13:21:38.248745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.570 ms 00:25:46.286 [2024-07-25 13:21:38.248758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.267421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.267493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:46.286 [2024-07-25 13:21:38.267514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.622 ms 00:25:46.286 [2024-07-25 13:21:38.267526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.270511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.270559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:46.286 [2024-07-25 13:21:38.270577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:25:46.286 [2024-07-25 13:21:38.270588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.302523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.302578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:46.286 [2024-07-25 13:21:38.302599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.910 ms 00:25:46.286 [2024-07-25 13:21:38.302611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.334081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.334153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:46.286 [2024-07-25 13:21:38.334175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.436 ms 00:25:46.286 [2024-07-25 13:21:38.334186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.365496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.365562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:46.286 [2024-07-25 13:21:38.365584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.268 ms 00:25:46.286 [2024-07-25 13:21:38.365612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.396720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.286 [2024-07-25 13:21:38.396771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:46.286 [2024-07-25 13:21:38.396791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.017 ms 00:25:46.286 [2024-07-25 13:21:38.396803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.286 [2024-07-25 13:21:38.396832] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:46.286 [2024-07-25 
13:21:38.396854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:46.286 [2024-07-25 13:21:38.396870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:25:46.286 [2024-07-25 13:21:38.396883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:46.286 [2024-07-25 13:21:38.396894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:46.286 [2024-07-25 13:21:38.396906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:46.286 [2024-07-25 13:21:38.396918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.396930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.396941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.396953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.396977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.396990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:25:46.287 [2024-07-25 13:21:38.397181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:46.287 [2024-07-25 13:21:38.397996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:46.288 [2024-07-25 13:21:38.398011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:46.288 [2024-07-25 13:21:38.398023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:46.288 [2024-07-25 13:21:38.398036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:46.288 [2024-07-25 13:21:38.398047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:46.288 [2024-07-25 13:21:38.398059] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:46.288 [2024-07-25 13:21:38.398080] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:46.288 [2024-07-25 13:21:38.398092] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 766e89cb-ea00-4368-85a7-fe9fb3737ad0 00:25:46.288 [2024-07-25 13:21:38.398114] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:25:46.288 [2024-07-25 13:21:38.398132] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135616 00:25:46.288 [2024-07-25 13:21:38.398144] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133632 00:25:46.288 [2024-07-25 13:21:38.398156] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:25:46.288 [2024-07-25 13:21:38.398170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:46.288 [2024-07-25 13:21:38.398182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:46.288 [2024-07-25 13:21:38.398193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:46.288 [2024-07-25 13:21:38.398203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:46.288 [2024-07-25 13:21:38.398213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:46.288 [2024-07-25 13:21:38.398224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.288 [2024-07-25 13:21:38.398235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:46.288 [2024-07-25 13:21:38.398247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:25:46.288 [2024-07-25 13:21:38.398258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.288 [2024-07-25 13:21:38.414798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.288 [2024-07-25 13:21:38.414846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:46.288 [2024-07-25 13:21:38.414872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.495 ms 00:25:46.288 [2024-07-25 13:21:38.414898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.288 [2024-07-25 13:21:38.415356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.288 [2024-07-25 13:21:38.415392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:46.288 [2024-07-25 13:21:38.415408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:25:46.288 [2024-07-25 13:21:38.415419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.288 [2024-07-25 13:21:38.452327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.288 [2024-07-25 13:21:38.452393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.288 [2024-07-25 13:21:38.452412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.288 [2024-07-25 13:21:38.452424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.288 [2024-07-25 13:21:38.452504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.288 [2024-07-25 13:21:38.452520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.288 [2024-07-25 13:21:38.452532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.288 [2024-07-25 13:21:38.452543] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.288 [2024-07-25 13:21:38.452639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.288 [2024-07-25 13:21:38.452665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.288 [2024-07-25 13:21:38.452678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.288 [2024-07-25 13:21:38.452688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.288 [2024-07-25 13:21:38.452712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.288 [2024-07-25 13:21:38.452726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.288 [2024-07-25 13:21:38.452738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.288 [2024-07-25 13:21:38.452749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.551824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.551910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.546 [2024-07-25 13:21:38.551932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.551944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.636092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.636180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.546 [2024-07-25 13:21:38.636202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.636214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.636318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.636337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.546 [2024-07-25 13:21:38.636358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.636369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.636418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.636451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.546 [2024-07-25 13:21:38.636464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.636475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.636605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.636626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.546 [2024-07-25 13:21:38.636638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.636656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.636709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.636747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:46.546 [2024-07-25 13:21:38.636762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.636773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.636819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.636835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.546 [2024-07-25 13:21:38.636847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.636857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.636917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.546 [2024-07-25 13:21:38.636933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.546 [2024-07-25 13:21:38.636945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.546 [2024-07-25 13:21:38.636968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.546 [2024-07-25 13:21:38.637135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.070 ms, result 0 00:25:47.922 00:25:47.922 00:25:47.922 13:21:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:49.822 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:49.822 13:21:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:49.822 [2024-07-25 13:21:42.005549] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:49.822 [2024-07-25 13:21:42.005699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83815 ] 00:25:50.080 [2024-07-25 13:21:42.175652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.338 [2024-07-25 13:21:42.397274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.596 [2024-07-25 13:21:42.707302] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:50.596 [2024-07-25 13:21:42.707382] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:50.856 [2024-07-25 13:21:42.868262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.868328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:50.856 [2024-07-25 13:21:42.868348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:50.856 [2024-07-25 13:21:42.868361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.868431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.868450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:50.856 [2024-07-25 13:21:42.868463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:50.856 [2024-07-25 13:21:42.868477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.868514] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:50.856 [2024-07-25 13:21:42.869448] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:50.856 [2024-07-25 13:21:42.869487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.869501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:50.856 [2024-07-25 13:21:42.869513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:25:50.856 [2024-07-25 13:21:42.869525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.870672] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:50.856 [2024-07-25 13:21:42.886818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.886864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:50.856 [2024-07-25 13:21:42.886883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.147 ms 00:25:50.856 [2024-07-25 13:21:42.886895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.886972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.886995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:50.856 [2024-07-25 13:21:42.887008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:50.856 [2024-07-25 13:21:42.887019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.892205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:50.856 [2024-07-25 13:21:42.892279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:50.856 [2024-07-25 13:21:42.892307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.054 ms 00:25:50.856 [2024-07-25 13:21:42.892326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.892480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.892512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:50.856 [2024-07-25 13:21:42.892538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:50.856 [2024-07-25 13:21:42.892560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.892677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.892706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:50.856 [2024-07-25 13:21:42.892728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:50.856 [2024-07-25 13:21:42.892748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.856 [2024-07-25 13:21:42.892804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:50.856 [2024-07-25 13:21:42.899131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.856 [2024-07-25 13:21:42.899193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.856 [2024-07-25 13:21:42.899226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.339 ms 00:25:50.857 [2024-07-25 13:21:42.899251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.857 [2024-07-25 13:21:42.899334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.857 [2024-07-25 13:21:42.899362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:50.857 [2024-07-25 13:21:42.899385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:50.857 [2024-07-25 13:21:42.899407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.857 [2024-07-25 13:21:42.899518] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:50.857 [2024-07-25 13:21:42.899568] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:50.857 [2024-07-25 13:21:42.899631] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:50.857 [2024-07-25 13:21:42.899675] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:50.857 [2024-07-25 13:21:42.899804] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:50.857 [2024-07-25 13:21:42.899831] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:50.857 [2024-07-25 13:21:42.899857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:50.857 [2024-07-25 13:21:42.899884] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:50.857 [2024-07-25 13:21:42.899911] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:50.857 [2024-07-25 13:21:42.899934] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:50.857 [2024-07-25 13:21:42.899954] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:50.857 [2024-07-25 13:21:42.899973] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:50.857 [2024-07-25 13:21:42.899989] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:50.857 [2024-07-25 13:21:42.900010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.857 [2024-07-25 13:21:42.900039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:50.857 [2024-07-25 13:21:42.900059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:25:50.857 [2024-07-25 13:21:42.900078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.857 [2024-07-25 13:21:42.900230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.857 [2024-07-25 13:21:42.900260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:50.857 [2024-07-25 13:21:42.900281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:50.857 [2024-07-25 13:21:42.900300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.857 [2024-07-25 13:21:42.900437] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:50.857 [2024-07-25 13:21:42.900468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:50.857 [2024-07-25 13:21:42.900499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.857 [2024-07-25 13:21:42.900519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:50.857 [2024-07-25 13:21:42.900560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900580] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:50.857 [2024-07-25 13:21:42.900598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:50.857 [2024-07-25 13:21:42.900616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.857 [2024-07-25 13:21:42.900657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:50.857 [2024-07-25 13:21:42.900676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:50.857 [2024-07-25 13:21:42.900694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.857 [2024-07-25 13:21:42.900712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:50.857 [2024-07-25 13:21:42.900731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:50.857 [2024-07-25 13:21:42.900750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:50.857 [2024-07-25 13:21:42.900788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:50.857 [2024-07-25 13:21:42.900806] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:50.857 [2024-07-25 13:21:42.900858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.857 [2024-07-25 13:21:42.900893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:50.857 [2024-07-25 13:21:42.900911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.857 [2024-07-25 13:21:42.900944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:50.857 [2024-07-25 13:21:42.900977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:50.857 [2024-07-25 13:21:42.900996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.857 [2024-07-25 13:21:42.901012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:50.857 [2024-07-25 13:21:42.901029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:50.857 [2024-07-25 13:21:42.901046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.857 [2024-07-25 13:21:42.901063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:50.857 [2024-07-25 13:21:42.901080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:50.857 [2024-07-25 13:21:42.901096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.857 [2024-07-25 13:21:42.901138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:50.857 [2024-07-25 13:21:42.901158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:50.857 [2024-07-25 13:21:42.901176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.857 [2024-07-25 13:21:42.901202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:50.857 [2024-07-25 13:21:42.901220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:50.857 [2024-07-25 13:21:42.901237] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.857 [2024-07-25 13:21:42.901253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:50.857 [2024-07-25 13:21:42.901271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:50.857 [2024-07-25 13:21:42.901287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.857 [2024-07-25 13:21:42.901305] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:50.857 [2024-07-25 13:21:42.901324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:50.857 [2024-07-25 13:21:42.901343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.857 [2024-07-25 13:21:42.901361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.857 [2024-07-25 13:21:42.901380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:50.857 [2024-07-25 13:21:42.901399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:50.857 [2024-07-25 13:21:42.901416] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:50.857 
[2024-07-25 13:21:42.901434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:50.857 [2024-07-25 13:21:42.901451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:50.857 [2024-07-25 13:21:42.901469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:50.857 [2024-07-25 13:21:42.901488] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:50.857 [2024-07-25 13:21:42.901512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.857 [2024-07-25 13:21:42.901536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:50.857 [2024-07-25 13:21:42.901559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:50.857 [2024-07-25 13:21:42.901582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:50.857 [2024-07-25 13:21:42.901600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:50.857 [2024-07-25 13:21:42.901619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:50.857 [2024-07-25 13:21:42.901638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:50.857 [2024-07-25 13:21:42.901658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:50.857 [2024-07-25 13:21:42.901677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:50.857 [2024-07-25 13:21:42.901695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:50.858 [2024-07-25 13:21:42.901713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:50.858 [2024-07-25 13:21:42.901739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:50.858 [2024-07-25 13:21:42.901756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:50.858 [2024-07-25 13:21:42.901773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:50.858 [2024-07-25 13:21:42.901791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:50.858 [2024-07-25 13:21:42.901808] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:50.858 [2024-07-25 13:21:42.901826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.858 [2024-07-25 13:21:42.901852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:50.858 [2024-07-25 13:21:42.901869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:50.858 [2024-07-25 13:21:42.901886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:50.858 [2024-07-25 13:21:42.901902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:50.858 [2024-07-25 13:21:42.901920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.901938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:50.858 [2024-07-25 13:21:42.901956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:25:50.858 [2024-07-25 13:21:42.901972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:42.942879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.942936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.858 [2024-07-25 13:21:42.942957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.822 ms 00:25:50.858 [2024-07-25 13:21:42.942969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:42.943094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.943133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:50.858 [2024-07-25 13:21:42.943148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:50.858 [2024-07-25 13:21:42.943160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:42.981286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.981341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.858 [2024-07-25 13:21:42.981360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.024 ms 00:25:50.858 [2024-07-25 13:21:42.981373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:42.981441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.981459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.858 [2024-07-25 13:21:42.981472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:50.858 [2024-07-25 13:21:42.981490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:42.981890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.981920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.858 [2024-07-25 13:21:42.981934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:25:50.858 [2024-07-25 13:21:42.981946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:42.982120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.982141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.858 [2024-07-25 13:21:42.982154] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:25:50.858 [2024-07-25 13:21:42.982167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:42.998004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:42.998050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.858 [2024-07-25 13:21:42.998067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.803 ms 00:25:50.858 [2024-07-25 13:21:42.998084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:43.014394] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:50.858 [2024-07-25 13:21:43.014440] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:50.858 [2024-07-25 13:21:43.014460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:43.014472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:50.858 [2024-07-25 13:21:43.014485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.223 ms 00:25:50.858 [2024-07-25 13:21:43.014497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.858 [2024-07-25 13:21:43.044134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.858 [2024-07-25 13:21:43.044184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:50.858 [2024-07-25 13:21:43.044204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.588 ms 00:25:50.858 [2024-07-25 13:21:43.044215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.059776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.059837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:51.117 [2024-07-25 13:21:43.059855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.509 ms 00:25:51.117 [2024-07-25 13:21:43.059866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.075336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.075423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:51.117 [2024-07-25 13:21:43.075445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.416 ms 00:25:51.117 [2024-07-25 13:21:43.075457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.076398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.076436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:51.117 [2024-07-25 13:21:43.076453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:25:51.117 [2024-07-25 13:21:43.076465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.148773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.148852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:51.117 [2024-07-25 13:21:43.148874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.276 ms 00:25:51.117 [2024-07-25 13:21:43.148894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.161530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:51.117 [2024-07-25 13:21:43.164220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.164257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:51.117 [2024-07-25 13:21:43.164276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.231 ms 00:25:51.117 [2024-07-25 13:21:43.164288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.164412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.164441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:51.117 [2024-07-25 13:21:43.164455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:51.117 [2024-07-25 13:21:43.164466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.165160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.165192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:51.117 [2024-07-25 13:21:43.165206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:25:51.117 [2024-07-25 13:21:43.165218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.165255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.165271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:51.117 [2024-07-25 13:21:43.165283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:51.117 [2024-07-25 13:21:43.165295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.165335] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:51.117 [2024-07-25 13:21:43.165352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.165369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:51.117 [2024-07-25 13:21:43.165381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:51.117 [2024-07-25 13:21:43.165392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.196710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.196777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:51.117 [2024-07-25 13:21:43.196798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.292 ms 00:25:51.117 [2024-07-25 13:21:43.196819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.117 [2024-07-25 13:21:43.196914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.117 [2024-07-25 13:21:43.196934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:51.117 [2024-07-25 13:21:43.196948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:51.117 [2024-07-25 13:21:43.196970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:51.117 [2024-07-25 13:21:43.198215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.427 ms, result 0 00:26:28.487  Copying: 28/1024 [MB] (28 MBps) Copying: 55/1024 [MB] (26 MBps) Copying: 83/1024 [MB] (27 MBps) Copying: 112/1024 [MB] (28 MBps) Copying: 138/1024 [MB] (26 MBps) Copying: 166/1024 [MB] (27 MBps) Copying: 195/1024 [MB] (28 MBps) Copying: 223/1024 [MB] (27 MBps) Copying: 251/1024 [MB] (28 MBps) Copying: 278/1024 [MB] (27 MBps) Copying: 304/1024 [MB] (25 MBps) Copying: 329/1024 [MB] (24 MBps) Copying: 356/1024 [MB] (27 MBps) Copying: 383/1024 [MB] (26 MBps) Copying: 411/1024 [MB] (28 MBps) Copying: 439/1024 [MB] (28 MBps) Copying: 467/1024 [MB] (27 MBps) Copying: 495/1024 [MB] (28 MBps) Copying: 523/1024 [MB] (28 MBps) Copying: 552/1024 [MB] (29 MBps) Copying: 578/1024 [MB] (26 MBps) Copying: 605/1024 [MB] (26 MBps) Copying: 633/1024 [MB] (27 MBps) Copying: 661/1024 [MB] (28 MBps) Copying: 689/1024 [MB] (28 MBps) Copying: 717/1024 [MB] (27 MBps) Copying: 745/1024 [MB] (28 MBps) Copying: 774/1024 [MB] (29 MBps) Copying: 802/1024 [MB] (28 MBps) Copying: 829/1024 [MB] (27 MBps) Copying: 855/1024 [MB] (26 MBps) Copying: 884/1024 [MB] (28 MBps) Copying: 911/1024 [MB] (26 MBps) Copying: 939/1024 [MB] (27 MBps) Copying: 968/1024 [MB] (29 MBps) Copying: 994/1024 [MB] (26 MBps) Copying: 1022/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-25 13:22:20.501236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.501321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.487 [2024-07-25 13:22:20.501343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:28.487 [2024-07-25 13:22:20.501356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.501388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.487 [2024-07-25 13:22:20.505638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.505674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.487 [2024-07-25 13:22:20.505691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.226 ms 00:26:28.487 [2024-07-25 13:22:20.505709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.505969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.506008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.487 [2024-07-25 13:22:20.506024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:26:28.487 [2024-07-25 13:22:20.506036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.509601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.509631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.487 [2024-07-25 13:22:20.509646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:26:28.487 [2024-07-25 13:22:20.509658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.517256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.517319] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.487 [2024-07-25 13:22:20.517335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.561 ms 00:26:28.487 [2024-07-25 13:22:20.517346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.550004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.550049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.487 [2024-07-25 13:22:20.550068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.529 ms 00:26:28.487 [2024-07-25 13:22:20.550080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.567854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.567900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.487 [2024-07-25 13:22:20.567919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.713 ms 00:26:28.487 [2024-07-25 13:22:20.567932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.571311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.571352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:28.487 [2024-07-25 13:22:20.571377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:26:28.487 [2024-07-25 13:22:20.571389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.602893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.602966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:28.487 [2024-07-25 13:22:20.602995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.478 ms 00:26:28.487 [2024-07-25 13:22:20.603011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.634367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.634427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:28.487 [2024-07-25 13:22:20.634446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.211 ms 00:26:28.487 [2024-07-25 13:22:20.634458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.487 [2024-07-25 13:22:20.665219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.487 [2024-07-25 13:22:20.665272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:28.487 [2024-07-25 13:22:20.665308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.703 ms 00:26:28.487 [2024-07-25 13:22:20.665320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.789 [2024-07-25 13:22:20.696195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.789 [2024-07-25 13:22:20.696244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:28.789 [2024-07-25 13:22:20.696263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.758 ms 00:26:28.789 [2024-07-25 13:22:20.696275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.789 [2024-07-25 13:22:20.696358] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:26:28.789 [2024-07-25 13:22:20.696399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:28.789 [2024-07-25 13:22:20.696417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:26:28.789 [2024-07-25 13:22:20.696430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.696992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697028] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:28.789 [2024-07-25 13:22:20.697302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 
13:22:20.697361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:26:28.790 [2024-07-25 13:22:20.697663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:28.790 [2024-07-25 13:22:20.697684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:28.790 [2024-07-25 13:22:20.697697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 766e89cb-ea00-4368-85a7-fe9fb3737ad0 00:26:28.790 [2024-07-25 13:22:20.697716] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:26:28.790 [2024-07-25 13:22:20.697727] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:28.790 [2024-07-25 13:22:20.697738] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:28.790 [2024-07-25 13:22:20.697750] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:28.790 [2024-07-25 13:22:20.697760] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:28.790 [2024-07-25 13:22:20.697772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:28.790 [2024-07-25 13:22:20.697783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:28.790 [2024-07-25 13:22:20.697793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:28.790 [2024-07-25 13:22:20.697803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:28.790 [2024-07-25 13:22:20.697815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.790 [2024-07-25 13:22:20.697826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:28.790 [2024-07-25 13:22:20.697844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:26:28.790 [2024-07-25 13:22:20.697855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.714477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.790 [2024-07-25 13:22:20.714544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:28.790 [2024-07-25 13:22:20.714583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.572 ms 00:26:28.790 [2024-07-25 13:22:20.714596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.715053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.790 [2024-07-25 13:22:20.715080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:28.790 [2024-07-25 13:22:20.715095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:26:28.790 [2024-07-25 13:22:20.715134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.752217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.752276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.790 [2024-07-25 13:22:20.752310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.752322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.752400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.752416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.790 [2024-07-25 13:22:20.752428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.752445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.752566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.752597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.790 [2024-07-25 13:22:20.752611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.752623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.752647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.752661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.790 [2024-07-25 13:22:20.752673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.752684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.851651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.851722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.790 [2024-07-25 13:22:20.851743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.851755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.936178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.936255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.790 [2024-07-25 13:22:20.936275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.936296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.936381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.936399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.790 [2024-07-25 13:22:20.936412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.936423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.936503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.936520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.790 [2024-07-25 13:22:20.936532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.936544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.936669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.936700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.790 [2024-07-25 13:22:20.936714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.936725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.936777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.936796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:28.790 [2024-07-25 
13:22:20.936808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.936819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.936871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.936897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:28.790 [2024-07-25 13:22:20.936911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.936922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.936987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.790 [2024-07-25 13:22:20.937005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.790 [2024-07-25 13:22:20.937017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.790 [2024-07-25 13:22:20.937029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.790 [2024-07-25 13:22:20.937190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 435.928 ms, result 0 00:26:30.188 00:26:30.188 00:26:30.188 13:22:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:32.091 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:32.091 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:32.091 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:32.091 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:32.091 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82045 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82045 ']' 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 82045 00:26:32.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82045) - No such process 00:26:32.350 Process with pid 82045 is not found 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 82045 is not found' 00:26:32.350 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:32.608 13:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:32.608 Remove shared memory files 00:26:32.608 13:22:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:32.608 13:22:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:32.608 13:22:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:32.608 13:22:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:32.608 13:22:24 
ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:32.867 13:22:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:32.867 ************************************ 00:26:32.867 END TEST ftl_dirty_shutdown 00:26:32.867 ************************************ 00:26:32.867 00:26:32.867 real 3m36.814s 00:26:32.867 user 4m8.412s 00:26:32.867 sys 0m36.977s 00:26:32.867 13:22:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:32.867 13:22:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:32.867 13:22:24 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:32.867 13:22:24 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:32.867 13:22:24 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:32.867 13:22:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:32.867 ************************************ 00:26:32.867 START TEST ftl_upgrade_shutdown 00:26:32.867 ************************************ 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:32.867 * Looking for test storage... 00:26:32.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:32.867 
13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84303 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84303 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84303 ']' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.867 13:22:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:33.126 [2024-07-25 13:22:25.086647] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:33.126 [2024-07-25 13:22:25.086886] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84303 ] 00:26:33.126 [2024-07-25 13:22:25.280527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.384 [2024-07-25 13:22:25.537270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:34.319 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:26:34.576 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:34.835 { 00:26:34.835 "name": "basen1", 00:26:34.835 "aliases": [ 00:26:34.835 "feeb8616-e851-49e8-b936-4568e1af3006" 00:26:34.835 ], 00:26:34.835 "product_name": "NVMe disk", 00:26:34.835 "block_size": 4096, 00:26:34.835 "num_blocks": 1310720, 00:26:34.835 "uuid": "feeb8616-e851-49e8-b936-4568e1af3006", 00:26:34.835 "assigned_rate_limits": { 00:26:34.835 "rw_ios_per_sec": 0, 00:26:34.835 "rw_mbytes_per_sec": 0, 00:26:34.835 "r_mbytes_per_sec": 0, 00:26:34.835 "w_mbytes_per_sec": 0 00:26:34.835 }, 00:26:34.835 "claimed": true, 00:26:34.835 "claim_type": "read_many_write_one", 00:26:34.835 "zoned": false, 00:26:34.835 "supported_io_types": { 00:26:34.835 "read": true, 00:26:34.835 "write": true, 00:26:34.835 "unmap": true, 00:26:34.835 "flush": true, 00:26:34.835 "reset": true, 00:26:34.835 "nvme_admin": true, 00:26:34.835 "nvme_io": true, 00:26:34.835 "nvme_io_md": false, 00:26:34.835 "write_zeroes": true, 00:26:34.835 "zcopy": false, 00:26:34.835 "get_zone_info": false, 00:26:34.835 "zone_management": false, 00:26:34.835 "zone_append": false, 00:26:34.835 "compare": true, 00:26:34.835 "compare_and_write": false, 00:26:34.835 "abort": true, 00:26:34.835 "seek_hole": false, 00:26:34.835 "seek_data": false, 00:26:34.835 "copy": true, 00:26:34.835 "nvme_iov_md": false 00:26:34.835 }, 00:26:34.835 "driver_specific": { 00:26:34.835 "nvme": [ 00:26:34.835 { 00:26:34.835 "pci_address": "0000:00:11.0", 00:26:34.835 "trid": { 00:26:34.835 "trtype": "PCIe", 00:26:34.835 "traddr": "0000:00:11.0" 00:26:34.835 }, 00:26:34.835 "ctrlr_data": { 00:26:34.835 "cntlid": 0, 00:26:34.835 "vendor_id": "0x1b36", 00:26:34.835 "model_number": "QEMU NVMe Ctrl", 00:26:34.835 "serial_number": "12341", 00:26:34.835 "firmware_revision": "8.0.0", 00:26:34.835 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:34.835 "oacs": { 00:26:34.835 "security": 0, 00:26:34.835 "format": 1, 00:26:34.835 "firmware": 0, 00:26:34.835 "ns_manage": 1 00:26:34.835 }, 00:26:34.835 "multi_ctrlr": false, 00:26:34.835 "ana_reporting": false 00:26:34.835 }, 00:26:34.835 "vs": { 00:26:34.835 "nvme_version": "1.4" 00:26:34.835 }, 00:26:34.835 "ns_data": { 00:26:34.835 "id": 1, 00:26:34.835 "can_share": false 00:26:34.835 } 00:26:34.835 } 00:26:34.835 ], 00:26:34.835 "mp_policy": "active_passive" 00:26:34.835 } 00:26:34.835 } 00:26:34.835 ]' 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:34.835 13:22:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:35.093 13:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a91feac6-3a51-4201-b2e9-28572dc6d0ff 00:26:35.093 13:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:35.093 13:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a91feac6-3a51-4201-b2e9-28572dc6d0ff 00:26:35.351 13:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:35.609 13:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f756fef7-44e3-4ea7-b31c-ba882962e7c9 00:26:35.609 13:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f756fef7-44e3-4ea7-b31c-ba882962e7c9 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 ]] 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 5120 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:35.868 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 00:26:36.126 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:36.126 { 00:26:36.126 "name": "f8bebe0e-abe0-4816-9ebf-a6edca55ccd9", 00:26:36.126 "aliases": [ 00:26:36.126 "lvs/basen1p0" 00:26:36.126 ], 00:26:36.126 "product_name": "Logical Volume", 00:26:36.126 "block_size": 4096, 00:26:36.126 "num_blocks": 5242880, 00:26:36.126 "uuid": "f8bebe0e-abe0-4816-9ebf-a6edca55ccd9", 00:26:36.126 "assigned_rate_limits": { 00:26:36.126 "rw_ios_per_sec": 0, 00:26:36.126 "rw_mbytes_per_sec": 0, 00:26:36.126 "r_mbytes_per_sec": 0, 00:26:36.126 "w_mbytes_per_sec": 0 00:26:36.126 }, 00:26:36.126 "claimed": false, 00:26:36.126 "zoned": false, 00:26:36.126 "supported_io_types": { 00:26:36.126 "read": true, 00:26:36.126 "write": true, 00:26:36.126 "unmap": true, 00:26:36.126 "flush": false, 00:26:36.126 "reset": true, 00:26:36.126 "nvme_admin": false, 00:26:36.126 "nvme_io": false, 00:26:36.126 "nvme_io_md": false, 00:26:36.126 "write_zeroes": true, 00:26:36.126 
"zcopy": false, 00:26:36.126 "get_zone_info": false, 00:26:36.126 "zone_management": false, 00:26:36.126 "zone_append": false, 00:26:36.126 "compare": false, 00:26:36.126 "compare_and_write": false, 00:26:36.126 "abort": false, 00:26:36.126 "seek_hole": true, 00:26:36.126 "seek_data": true, 00:26:36.126 "copy": false, 00:26:36.126 "nvme_iov_md": false 00:26:36.126 }, 00:26:36.126 "driver_specific": { 00:26:36.126 "lvol": { 00:26:36.126 "lvol_store_uuid": "f756fef7-44e3-4ea7-b31c-ba882962e7c9", 00:26:36.126 "base_bdev": "basen1", 00:26:36.126 "thin_provision": true, 00:26:36.126 "num_allocated_clusters": 0, 00:26:36.126 "snapshot": false, 00:26:36.126 "clone": false, 00:26:36.126 "esnap_clone": false 00:26:36.126 } 00:26:36.126 } 00:26:36.126 } 00:26:36.126 ]' 00:26:36.126 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:36.385 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:36.643 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:36.643 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:36.643 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:36.904 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:36.904 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:36.904 13:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 -c cachen1p0 --l2p_dram_limit 2 00:26:37.172 [2024-07-25 13:22:29.212265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.212333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:37.172 [2024-07-25 13:22:29.212355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:37.172 [2024-07-25 13:22:29.212369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.212450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.212471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:37.172 [2024-07-25 13:22:29.212484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:37.172 [2024-07-25 13:22:29.212497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.212527] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:37.172 [2024-07-25 13:22:29.213543] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:37.172 [2024-07-25 13:22:29.213580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.213599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:37.172 [2024-07-25 13:22:29.213613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.060 ms 00:26:37.172 [2024-07-25 13:22:29.213629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.213806] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b0d8f7f3-bdb3-4318-bcb9-6fd0008eaba3 00:26:37.172 [2024-07-25 13:22:29.214892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.214933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:37.172 [2024-07-25 13:22:29.214953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:37.172 [2024-07-25 13:22:29.214966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.219525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.219573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:37.172 [2024-07-25 13:22:29.219593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.483 ms 00:26:37.172 [2024-07-25 13:22:29.219606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.219673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.219692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:37.172 [2024-07-25 13:22:29.219707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:37.172 [2024-07-25 13:22:29.219718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.219838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.219878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:37.172 [2024-07-25 13:22:29.219899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:37.172 [2024-07-25 13:22:29.219911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.219948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:37.172 [2024-07-25 13:22:29.224469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.224512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:37.172 [2024-07-25 13:22:29.224529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.533 ms 00:26:37.172 [2024-07-25 13:22:29.224545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.224583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.224601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:37.172 [2024-07-25 13:22:29.224615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:37.172 [2024-07-25 13:22:29.224628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
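Condensed from the rpc.py xtrace above, the bdev stack under test is assembled roughly as follows; the bdev_ftl_create call at the end is what drives the FTL management steps being traced here. The PCIe addresses, sizes, timeouts and bdev names are the ones printed in the log, while the lvstore/lvol UUIDs (a91feac6-…, f756fef7-…, f8bebe0e-…) are specific to this run and would differ elsewhere; the spdk_tgt process itself was started earlier in the trace (pid 84303 here).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0        # base NVMe namespace -> basen1
  $rpc bdev_lvol_delete_lvstore -u a91feac6-3a51-4201-b2e9-28572dc6d0ff   # clear the pre-existing lvstore found by clear_lvols (UUID is run-specific)
  $rpc bdev_lvol_create_lvstore basen1 lvs
  $rpc bdev_lvol_create basen1p0 20480 -t -u f756fef7-44e3-4ea7-b31c-ba882962e7c9   # 20480 MiB thin-provisioned lvol, the FTL base bdev
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0       # cache NVMe namespace -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                                # one 5120 MiB split -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d f8bebe0e-abe0-4816-9ebf-a6edca55ccd9 -c cachen1p0 --l2p_dram_limit 2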
00:26:37.172 [2024-07-25 13:22:29.224672] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:37.172 [2024-07-25 13:22:29.224837] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:37.172 [2024-07-25 13:22:29.224867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:37.172 [2024-07-25 13:22:29.224890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:37.172 [2024-07-25 13:22:29.224905] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:37.172 [2024-07-25 13:22:29.224921] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:37.172 [2024-07-25 13:22:29.224934] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:37.172 [2024-07-25 13:22:29.224950] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:37.172 [2024-07-25 13:22:29.224971] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:37.172 [2024-07-25 13:22:29.224987] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:37.172 [2024-07-25 13:22:29.225000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.225013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:37.172 [2024-07-25 13:22:29.225025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.330 ms 00:26:37.172 [2024-07-25 13:22:29.225038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.225150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.225176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:37.172 [2024-07-25 13:22:29.225189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:26:37.172 [2024-07-25 13:22:29.225205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.172 [2024-07-25 13:22:29.225328] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:37.172 [2024-07-25 13:22:29.225353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:37.172 [2024-07-25 13:22:29.225366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:37.172 [2024-07-25 13:22:29.225380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:37.172 [2024-07-25 13:22:29.225405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:37.172 [2024-07-25 13:22:29.225443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:37.172 [2024-07-25 13:22:29.225454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:37.172 [2024-07-25 13:22:29.225466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:37.172 [2024-07-25 13:22:29.225490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:26:37.172 [2024-07-25 13:22:29.225501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:37.172 [2024-07-25 13:22:29.225526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:37.172 [2024-07-25 13:22:29.225539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:37.172 [2024-07-25 13:22:29.225565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:37.172 [2024-07-25 13:22:29.225576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:37.172 [2024-07-25 13:22:29.225601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:37.172 [2024-07-25 13:22:29.225614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:37.172 [2024-07-25 13:22:29.225625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:37.172 [2024-07-25 13:22:29.225638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:37.172 [2024-07-25 13:22:29.225649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:37.172 [2024-07-25 13:22:29.225661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:37.172 [2024-07-25 13:22:29.225672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:37.172 [2024-07-25 13:22:29.225684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:37.172 [2024-07-25 13:22:29.225695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:37.172 [2024-07-25 13:22:29.225709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:37.172 [2024-07-25 13:22:29.225719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:37.172 [2024-07-25 13:22:29.225732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:37.172 [2024-07-25 13:22:29.225743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:37.172 [2024-07-25 13:22:29.225757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:37.172 [2024-07-25 13:22:29.225781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:37.172 [2024-07-25 13:22:29.225791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:37.172 [2024-07-25 13:22:29.225814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:37.172 [2024-07-25 13:22:29.225852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:37.172 [2024-07-25 13:22:29.225863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225875] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:26:37.172 [2024-07-25 13:22:29.225887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:37.172 [2024-07-25 13:22:29.225901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:37.172 [2024-07-25 13:22:29.225912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:37.172 [2024-07-25 13:22:29.225926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:37.172 [2024-07-25 13:22:29.225938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:37.172 [2024-07-25 13:22:29.225952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:37.172 [2024-07-25 13:22:29.225964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:37.172 [2024-07-25 13:22:29.225977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:37.172 [2024-07-25 13:22:29.225989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:37.172 [2024-07-25 13:22:29.226005] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:37.172 [2024-07-25 13:22:29.226022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:37.172 [2024-07-25 13:22:29.226050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:37.172 [2024-07-25 13:22:29.226088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:37.172 [2024-07-25 13:22:29.226100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:37.172 [2024-07-25 13:22:29.226130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:37.172 [2024-07-25 13:22:29.226142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:37.172 [2024-07-25 13:22:29.226239] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:37.172 [2024-07-25 13:22:29.226252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:37.172 [2024-07-25 13:22:29.226278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:37.172 [2024-07-25 13:22:29.226291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:37.172 [2024-07-25 13:22:29.226303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:37.172 [2024-07-25 13:22:29.226318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.172 [2024-07-25 13:22:29.226330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:37.172 [2024-07-25 13:22:29.226345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.054 ms 00:26:37.172 [2024-07-25 13:22:29.226356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.173 [2024-07-25 13:22:29.226412] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
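The region table above is the superblock v5 view of the same layout: each entry lists a region type, version, and its offset/size in FTL blocks (hex). Assuming the 4 KiB block size these numbers imply (an assumption, not something the test states), the figures line up with the MiB dump, e.g. the l2p region (type 0x2, blk_sz 0xe80) is just big enough for the 3,774,873 four-byte L2P entries reported earlier:

    # sanity check, assuming 4 KiB FTL blocks
    echo "scale=2; $((0xe80)) * 4096 / 1048576" | bc   # -> 14.50, matching "Region l2p ... blocks: 14.50 MiB"
    echo $(( 3774873 * 4 ))                            # -> 15099492 bytes of L2P, which fits in those 14.50 MiB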
00:26:37.173 [2024-07-25 13:22:29.226434] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:39.072 [2024-07-25 13:22:31.219211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.072 [2024-07-25 13:22:31.219286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:39.072 [2024-07-25 13:22:31.219311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1992.802 ms 00:26:39.072 [2024-07-25 13:22:31.219325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.072 [2024-07-25 13:22:31.251518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.072 [2024-07-25 13:22:31.251583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:39.072 [2024-07-25 13:22:31.251607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.925 ms 00:26:39.072 [2024-07-25 13:22:31.251621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.072 [2024-07-25 13:22:31.251759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.072 [2024-07-25 13:22:31.251789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:39.072 [2024-07-25 13:22:31.251810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:39.072 [2024-07-25 13:22:31.251822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.330 [2024-07-25 13:22:31.290530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.330 [2024-07-25 13:22:31.290587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:39.330 [2024-07-25 13:22:31.290610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.628 ms 00:26:39.330 [2024-07-25 13:22:31.290623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.330 [2024-07-25 13:22:31.290692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.330 [2024-07-25 13:22:31.290707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:39.330 [2024-07-25 13:22:31.290728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:39.330 [2024-07-25 13:22:31.290739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.291135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.291161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:39.331 [2024-07-25 13:22:31.291178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:26:39.331 [2024-07-25 13:22:31.291190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.291255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.291275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:39.331 [2024-07-25 13:22:31.291290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:39.331 [2024-07-25 13:22:31.291301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.308684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.308742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:39.331 [2024-07-25 13:22:31.308764] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.351 ms 00:26:39.331 [2024-07-25 13:22:31.308777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.322314] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:39.331 [2024-07-25 13:22:31.323238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.323284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:39.331 [2024-07-25 13:22:31.323304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.327 ms 00:26:39.331 [2024-07-25 13:22:31.323319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.357459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.357539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:39.331 [2024-07-25 13:22:31.357561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.088 ms 00:26:39.331 [2024-07-25 13:22:31.357576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.357703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.357727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:39.331 [2024-07-25 13:22:31.357742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:26:39.331 [2024-07-25 13:22:31.357758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.388682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.388741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:39.331 [2024-07-25 13:22:31.388761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.822 ms 00:26:39.331 [2024-07-25 13:22:31.388778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.419457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.419526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:39.331 [2024-07-25 13:22:31.419546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.624 ms 00:26:39.331 [2024-07-25 13:22:31.419560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.420306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.420352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:39.331 [2024-07-25 13:22:31.420371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:26:39.331 [2024-07-25 13:22:31.420384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.331 [2024-07-25 13:22:31.507413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.331 [2024-07-25 13:22:31.507490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:39.331 [2024-07-25 13:22:31.507512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.958 ms 00:26:39.331 [2024-07-25 13:22:31.507530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.589 [2024-07-25 13:22:31.539517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:39.589 [2024-07-25 13:22:31.539578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:39.589 [2024-07-25 13:22:31.539599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.928 ms 00:26:39.589 [2024-07-25 13:22:31.539615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.589 [2024-07-25 13:22:31.570713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.589 [2024-07-25 13:22:31.570767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:39.589 [2024-07-25 13:22:31.570798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.044 ms 00:26:39.589 [2024-07-25 13:22:31.570814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.589 [2024-07-25 13:22:31.602901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.589 [2024-07-25 13:22:31.602953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:39.589 [2024-07-25 13:22:31.602973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.034 ms 00:26:39.589 [2024-07-25 13:22:31.602987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.589 [2024-07-25 13:22:31.603047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.589 [2024-07-25 13:22:31.603070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:39.589 [2024-07-25 13:22:31.603084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:39.589 [2024-07-25 13:22:31.603100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.589 [2024-07-25 13:22:31.603243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.589 [2024-07-25 13:22:31.603269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:39.589 [2024-07-25 13:22:31.603284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:39.589 [2024-07-25 13:22:31.603299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.589 [2024-07-25 13:22:31.604345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2391.556 ms, result 0 00:26:39.589 { 00:26:39.589 "name": "ftl", 00:26:39.589 "uuid": "b0d8f7f3-bdb3-4318-bcb9-6fd0008eaba3" 00:26:39.589 } 00:26:39.589 13:22:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:39.847 [2024-07-25 13:22:31.939725] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.847 13:22:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:40.105 13:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:40.364 [2024-07-25 13:22:32.480337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:40.364 13:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:40.622 [2024-07-25 13:22:32.713797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:40.622 13:22:32 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:41.192 Fill FTL, iteration 1 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84420 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84420 /var/tmp/spdk.tgt.sock 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84420 ']' 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.192 13:22:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:41.192 [2024-07-25 13:22:33.183139] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
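At this point ftl/common.sh has exported the freshly created FTL bdev over NVMe/TCP on loopback and is starting a second, single-core SPDK target on /var/tmp/spdk.tgt.sock to act as the dd initiator. The export sequence, collected from the rpc.py calls above (paths exactly as they appear in this log; the save_config redirect target is presumed to be test/ftl/config/tgt.json, which the restart at the end of this excerpt loads):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc save_config

The helper target then attaches to that subsystem (creating bdev "ftln1"), dumps its bdev subsystem config to ini.json for spdk_dd, and is killed again, as the xtrace lines that follow show.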
00:26:41.192 [2024-07-25 13:22:33.183296] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84420 ] 00:26:41.192 [2024-07-25 13:22:33.347560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.450 [2024-07-25 13:22:33.571496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.385 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:42.385 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:42.385 13:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:42.643 ftln1 00:26:42.643 13:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:42.643 13:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84420 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84420 ']' 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84420 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84420 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:42.901 killing process with pid 84420 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84420' 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84420 00:26:42.901 13:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84420 00:26:44.801 13:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:44.801 13:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:45.059 [2024-07-25 13:22:37.019565] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:45.059 [2024-07-25 13:22:37.019728] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84475 ] 00:26:45.059 [2024-07-25 13:22:37.191615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.317 [2024-07-25 13:22:37.417976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.818  Copying: 205/1024 [MB] (205 MBps) Copying: 417/1024 [MB] (212 MBps) Copying: 631/1024 [MB] (214 MBps) Copying: 840/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:26:51.818 00:26:51.818 Calculate MD5 checksum, iteration 1 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:51.818 13:22:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:51.818 [2024-07-25 13:22:43.974606] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
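The fill that just completed and the checksum read-back now starting are the first of two such passes, driven by the upgrade_shutdown.sh variables set earlier (size 1 GiB per pass, bs=1 MiB, count=1024, qd=2, iterations=2). A sketch of that loop, reconstructed from the xtrace lines rather than quoted from the script; "file" stands for /home/vagrant/spdk_repo/spdk/test/ftl/file:

    seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum file | cut -f1 -d' ')   # recorded here, presumably re-checked after the upgrade restart
    done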
00:26:51.818 [2024-07-25 13:22:43.974751] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84546 ] 00:26:52.086 [2024-07-25 13:22:44.133983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.343 [2024-07-25 13:22:44.334374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.586  Copying: 502/1024 [MB] (502 MBps) Copying: 994/1024 [MB] (492 MBps) Copying: 1024/1024 [MB] (average 496 MBps) 00:26:55.586 00:26:55.586 13:22:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:55.586 13:22:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:58.133 Fill FTL, iteration 2 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ae295715b9c27aa2f850574d2d7d3d1d 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:58.133 13:22:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:58.133 [2024-07-25 13:22:50.096770] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:58.133 [2024-07-25 13:22:50.096940] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84612 ] 00:26:58.133 [2024-07-25 13:22:50.273662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.391 [2024-07-25 13:22:50.512667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.871  Copying: 205/1024 [MB] (205 MBps) Copying: 412/1024 [MB] (207 MBps) Copying: 617/1024 [MB] (205 MBps) Copying: 833/1024 [MB] (216 MBps) Copying: 1024/1024 [MB] (average 208 MBps) 00:27:04.871 00:27:04.871 Calculate MD5 checksum, iteration 2 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:04.871 13:22:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:04.871 [2024-07-25 13:22:57.050660] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
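The second pass repeats the pattern one gigabyte further in. As with dd, the --seek/--skip offsets appear to be counted in --bs units, so seek=1024 and skip=1024 with bs=1048576 address the second GiB of ftln1 (1024 x 1 MiB equals the size of 1073741824 set at upgrade_shutdown.sh@28). The read-back that produces sums[1] is the spdk_dd call shown a few lines above, essentially:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=1024   # second GiB back out, then md5sum'ed into sums[1]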
00:27:04.872 [2024-07-25 13:22:57.050807] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84682 ] 00:27:05.129 [2024-07-25 13:22:57.217738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.388 [2024-07-25 13:22:57.403965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.521  Copying: 514/1024 [MB] (514 MBps) Copying: 985/1024 [MB] (471 MBps) Copying: 1024/1024 [MB] (average 489 MBps) 00:27:09.521 00:27:09.521 13:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:09.521 13:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:12.049 13:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:12.049 13:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=01b75fcb8f397cf237b4ea05b30f475a 00:27:12.049 13:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:12.049 13:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:12.049 13:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:12.049 [2024-07-25 13:23:03.966378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.049 [2024-07-25 13:23:03.966445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:12.049 [2024-07-25 13:23:03.966467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:12.049 [2024-07-25 13:23:03.966486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.049 [2024-07-25 13:23:03.966527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.049 [2024-07-25 13:23:03.966544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:12.049 [2024-07-25 13:23:03.966556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:12.049 [2024-07-25 13:23:03.966567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.049 [2024-07-25 13:23:03.966612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.049 [2024-07-25 13:23:03.966627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:12.049 [2024-07-25 13:23:03.966639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:12.049 [2024-07-25 13:23:03.966650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.049 [2024-07-25 13:23:03.966743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.348 ms, result 0 00:27:12.049 true 00:27:12.049 13:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:12.307 { 00:27:12.307 "name": "ftl", 00:27:12.307 "properties": [ 00:27:12.307 { 00:27:12.307 "name": "superblock_version", 00:27:12.307 "value": 5, 00:27:12.307 "read-only": true 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "name": "base_device", 00:27:12.307 "bands": [ 00:27:12.307 { 00:27:12.307 "id": 0, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 
00:27:12.307 { 00:27:12.307 "id": 1, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 2, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 3, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 4, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 5, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 6, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 7, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 8, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 9, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 10, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 11, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 12, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 13, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 14, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 15, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 16, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 17, 00:27:12.307 "state": "FREE", 00:27:12.307 "validity": 0.0 00:27:12.307 } 00:27:12.307 ], 00:27:12.307 "read-only": true 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "name": "cache_device", 00:27:12.307 "type": "bdev", 00:27:12.307 "chunks": [ 00:27:12.307 { 00:27:12.307 "id": 0, 00:27:12.307 "state": "INACTIVE", 00:27:12.307 "utilization": 0.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 1, 00:27:12.307 "state": "CLOSED", 00:27:12.307 "utilization": 1.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 2, 00:27:12.307 "state": "CLOSED", 00:27:12.307 "utilization": 1.0 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 3, 00:27:12.307 "state": "OPEN", 00:27:12.307 "utilization": 0.001953125 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "id": 4, 00:27:12.307 "state": "OPEN", 00:27:12.307 "utilization": 0.0 00:27:12.307 } 00:27:12.307 ], 00:27:12.307 "read-only": true 00:27:12.307 }, 00:27:12.307 { 00:27:12.307 "name": "verbose_mode", 00:27:12.307 "value": true, 00:27:12.307 "unit": "", 00:27:12.307 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:12.307 }, 00:27:12.308 { 00:27:12.308 "name": "prep_upgrade_on_shutdown", 00:27:12.308 "value": false, 00:27:12.308 "unit": "", 00:27:12.308 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:12.308 } 00:27:12.308 ] 00:27:12.308 } 00:27:12.308 13:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:12.565 [2024-07-25 13:23:04.543063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.565 [2024-07-25 
13:23:04.543146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:12.565 [2024-07-25 13:23:04.543167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:12.565 [2024-07-25 13:23:04.543179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.565 [2024-07-25 13:23:04.543219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.565 [2024-07-25 13:23:04.543235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:12.565 [2024-07-25 13:23:04.543247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:12.566 [2024-07-25 13:23:04.543258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.566 [2024-07-25 13:23:04.543286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.566 [2024-07-25 13:23:04.543299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:12.566 [2024-07-25 13:23:04.543311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:12.566 [2024-07-25 13:23:04.543322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.566 [2024-07-25 13:23:04.543450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.358 ms, result 0 00:27:12.566 true 00:27:12.566 13:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:12.566 13:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:12.566 13:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:12.823 13:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:12.823 13:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:12.823 13:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:13.082 [2024-07-25 13:23:05.131689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.082 [2024-07-25 13:23:05.131771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:13.082 [2024-07-25 13:23:05.131792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:13.082 [2024-07-25 13:23:05.131803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.082 [2024-07-25 13:23:05.131840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.082 [2024-07-25 13:23:05.131856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:13.082 [2024-07-25 13:23:05.131868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:13.082 [2024-07-25 13:23:05.131879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.082 [2024-07-25 13:23:05.131906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.082 [2024-07-25 13:23:05.131920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:13.082 [2024-07-25 13:23:05.131931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:13.082 [2024-07-25 13:23:05.131942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
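Right after arming prep_upgrade_on_shutdown, upgrade_shutdown.sh@59-64 checks that the cache actually holds data: it pulls bdev_ftl_get_properties and counts the cache chunks whose utilization is non-zero. With the chunk table dumped above (chunks 1 and 2 CLOSED at 1.0, chunk 3 OPEN at roughly 0.002) that count is 3, so the "used -eq 0" test at @64 comes back false, i.e. there is dirty cache data to carry across the shutdown. The check, as run in this log:

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
           | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]]   # used=3 here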
00:27:13.082 [2024-07-25 13:23:05.132016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.316 ms, result 0 00:27:13.082 true 00:27:13.082 13:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:13.340 { 00:27:13.340 "name": "ftl", 00:27:13.340 "properties": [ 00:27:13.340 { 00:27:13.340 "name": "superblock_version", 00:27:13.340 "value": 5, 00:27:13.340 "read-only": true 00:27:13.340 }, 00:27:13.340 { 00:27:13.340 "name": "base_device", 00:27:13.340 "bands": [ 00:27:13.340 { 00:27:13.340 "id": 0, 00:27:13.340 "state": "FREE", 00:27:13.340 "validity": 0.0 00:27:13.340 }, 00:27:13.340 { 00:27:13.340 "id": 1, 00:27:13.340 "state": "FREE", 00:27:13.340 "validity": 0.0 00:27:13.340 }, 00:27:13.341 { 00:27:13.341 "id": 2, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 3, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 4, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 5, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 6, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 7, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 8, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 9, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 10, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 11, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 12, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 13, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 14, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 15, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 16, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 17, 00:27:13.341 "state": "FREE", 00:27:13.341 "validity": 0.0 00:27:13.341 } 00:27:13.341 ], 00:27:13.341 "read-only": true 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "name": "cache_device", 00:27:13.341 "type": "bdev", 00:27:13.341 "chunks": [ 00:27:13.341 { 00:27:13.341 "id": 0, 00:27:13.341 "state": "INACTIVE", 00:27:13.341 "utilization": 0.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 1, 00:27:13.341 "state": "CLOSED", 00:27:13.341 "utilization": 1.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 2, 00:27:13.341 "state": "CLOSED", 00:27:13.341 "utilization": 1.0 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 3, 00:27:13.341 "state": "OPEN", 00:27:13.341 "utilization": 0.001953125 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "id": 4, 00:27:13.341 "state": "OPEN", 00:27:13.341 "utilization": 0.0 00:27:13.341 } 00:27:13.341 ], 00:27:13.341 "read-only": true 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "name": "verbose_mode", 00:27:13.341 "value": 
true, 00:27:13.341 "unit": "", 00:27:13.341 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:13.341 }, 00:27:13.341 { 00:27:13.341 "name": "prep_upgrade_on_shutdown", 00:27:13.341 "value": true, 00:27:13.341 "unit": "", 00:27:13.341 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:13.341 } 00:27:13.341 ] 00:27:13.341 } 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84303 ]] 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84303 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84303 ']' 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84303 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84303 00:27:13.341 killing process with pid 84303 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84303' 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84303 00:27:13.341 13:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84303 00:27:14.274 [2024-07-25 13:23:06.341987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:14.274 [2024-07-25 13:23:06.359597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.274 [2024-07-25 13:23:06.359659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:14.274 [2024-07-25 13:23:06.359680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:14.274 [2024-07-25 13:23:06.359692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.274 [2024-07-25 13:23:06.359729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:14.274 [2024-07-25 13:23:06.363093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.274 [2024-07-25 13:23:06.363140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:14.274 [2024-07-25 13:23:06.363163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.339 ms 00:27:14.274 [2024-07-25 13:23:06.363175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.798170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.798240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:24.243 [2024-07-25 13:23:14.798269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8435.015 ms 00:27:24.243 [2024-07-25 13:23:14.798282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.799481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
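With prep_upgrade_on_shutdown confirmed true in the property dump above, the test tears the target down so FTL takes its prepared-shutdown path: the Persist L2P / NV cache metadata / valid map / P2L / band and trim metadata / superblock steps that follow write everything out so that, per the property description above, the new version can take over after restart. What the test does here, using only helpers visible in this log (the pid variable is illustrative; the log shows pid 84303 being killed and waited on):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    killprocess "$spdk_tgt_pid"   # kill (SIGTERM by default) + wait, per autotest_common.sh@969/@974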
00:27:24.243 [2024-07-25 13:23:14.799529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:24.243 [2024-07-25 13:23:14.799547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.173 ms 00:27:24.243 [2024-07-25 13:23:14.799560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.800816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.800854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:24.243 [2024-07-25 13:23:14.800877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.212 ms 00:27:24.243 [2024-07-25 13:23:14.800889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.813322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.813368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:24.243 [2024-07-25 13:23:14.813385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.380 ms 00:27:24.243 [2024-07-25 13:23:14.813397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.821000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.821047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:24.243 [2024-07-25 13:23:14.821065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.558 ms 00:27:24.243 [2024-07-25 13:23:14.821076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.821213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.821242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:24.243 [2024-07-25 13:23:14.821255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:27:24.243 [2024-07-25 13:23:14.821267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.833453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.833497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:24.243 [2024-07-25 13:23:14.833514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.162 ms 00:27:24.243 [2024-07-25 13:23:14.833525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.845765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.845806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:24.243 [2024-07-25 13:23:14.845822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.196 ms 00:27:24.243 [2024-07-25 13:23:14.845844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.858225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.858273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:24.243 [2024-07-25 13:23:14.858289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.338 ms 00:27:24.243 [2024-07-25 13:23:14.858300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.870408] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:24.243 [2024-07-25 13:23:14.870452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:24.243 [2024-07-25 13:23:14.870468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.003 ms 00:27:24.243 [2024-07-25 13:23:14.870479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.243 [2024-07-25 13:23:14.870522] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:24.243 [2024-07-25 13:23:14.870547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:24.243 [2024-07-25 13:23:14.870562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:24.243 [2024-07-25 13:23:14.870574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:24.243 [2024-07-25 13:23:14.870586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:24.243 [2024-07-25 13:23:14.870803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:24.243 [2024-07-25 13:23:14.870817] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b0d8f7f3-bdb3-4318-bcb9-6fd0008eaba3 00:27:24.243 [2024-07-25 13:23:14.870838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:24.243 [2024-07-25 13:23:14.870858] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
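The statistics block that closes the shutdown (it continues just below with user writes and the WAF) is easy to cross-check: at the 4 KiB block size assumed earlier, two 1 GiB urandom fills are 524288 user blocks, and 786752 total blocks written against that gives the reported write amplification of 1.5006.

    echo $(( 524288 * 4096 / 1048576 ))   # -> 2048 MiB, i.e. the two 1 GiB fills
    echo "scale=4; 786752 / 524288" | bc  # -> 1.5006, the WAF printed below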
00:27:24.243 [2024-07-25 13:23:14.870876] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:24.244 [2024-07-25 13:23:14.870897] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:24.244 [2024-07-25 13:23:14.870918] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:24.244 [2024-07-25 13:23:14.870945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:24.244 [2024-07-25 13:23:14.870956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:24.244 [2024-07-25 13:23:14.870967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:24.244 [2024-07-25 13:23:14.870977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:24.244 [2024-07-25 13:23:14.870989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.244 [2024-07-25 13:23:14.871001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:24.244 [2024-07-25 13:23:14.871013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:27:24.244 [2024-07-25 13:23:14.871024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:14.888087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.244 [2024-07-25 13:23:14.888154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:24.244 [2024-07-25 13:23:14.888181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.001 ms 00:27:24.244 [2024-07-25 13:23:14.888194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:14.888659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.244 [2024-07-25 13:23:14.888696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:24.244 [2024-07-25 13:23:14.888712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:27:24.244 [2024-07-25 13:23:14.888723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:14.940658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:14.940734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:24.244 [2024-07-25 13:23:14.940753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:14.940765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:14.940826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:14.940841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:24.244 [2024-07-25 13:23:14.940853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:14.940864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:14.941021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:14.941044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:24.244 [2024-07-25 13:23:14.941065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:14.941076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:14.941128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:14.941156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:24.244 [2024-07-25 13:23:14.941171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:14.941182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.042416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.042491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:24.244 [2024-07-25 13:23:15.042511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.042523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.128754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.128825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:24.244 [2024-07-25 13:23:15.128846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.128858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.129024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.129056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:24.244 [2024-07-25 13:23:15.129077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.129097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.129206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.129237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:24.244 [2024-07-25 13:23:15.129252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.129263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.129423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.129454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:24.244 [2024-07-25 13:23:15.129469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.129480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.129554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.129583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:24.244 [2024-07-25 13:23:15.129608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.129620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.129669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.129691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:24.244 [2024-07-25 13:23:15.129713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.129736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.129800] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:24.244 [2024-07-25 13:23:15.129827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:24.244 [2024-07-25 13:23:15.129841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:24.244 [2024-07-25 13:23:15.129860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.244 [2024-07-25 13:23:15.130048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8770.474 ms, result 0 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84899 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84899 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84899 ']' 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.784 13:23:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:26.784 [2024-07-25 13:23:18.716435] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
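The tcp_target_setup call traced above restarts a dedicated SPDK target for the upgrade/shutdown test. A minimal sketch of what it reduces to, with the binary, cpumask, config path, and helper names exactly as they appear in this log (the real ftl/common.sh wraps this in additional handling):

    # launch the target pinned to core 0 with the FTL config written earlier
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # autotest_common.sh helper: blocks until the target listens on /var/tmp/spdk.sock
    waitforlisten "$spdk_tgt_pid"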
00:27:26.784 [2024-07-25 13:23:18.716625] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84899 ] 00:27:26.784 [2024-07-25 13:23:18.887331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.042 [2024-07-25 13:23:19.105304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.978 [2024-07-25 13:23:19.895286] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:27.978 [2024-07-25 13:23:19.895362] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:27.978 [2024-07-25 13:23:20.043616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.043682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:27.978 [2024-07-25 13:23:20.043704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:27.978 [2024-07-25 13:23:20.043716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.043791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.043811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:27.978 [2024-07-25 13:23:20.043824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:27.978 [2024-07-25 13:23:20.043835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.043875] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:27.978 [2024-07-25 13:23:20.044823] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:27.978 [2024-07-25 13:23:20.044858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.044873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:27.978 [2024-07-25 13:23:20.044886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.995 ms 00:27:27.978 [2024-07-25 13:23:20.044904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.046100] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:27.978 [2024-07-25 13:23:20.062347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.062414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:27.978 [2024-07-25 13:23:20.062435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.247 ms 00:27:27.978 [2024-07-25 13:23:20.062447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.062555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.062575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:27.978 [2024-07-25 13:23:20.062589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:27.978 [2024-07-25 13:23:20.062600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.067342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 
13:23:20.067393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:27.978 [2024-07-25 13:23:20.067410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.600 ms 00:27:27.978 [2024-07-25 13:23:20.067421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.067536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.067558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:27.978 [2024-07-25 13:23:20.067576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:27.978 [2024-07-25 13:23:20.067587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.067674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.067700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:27.978 [2024-07-25 13:23:20.067715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:27.978 [2024-07-25 13:23:20.067727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.067766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:27.978 [2024-07-25 13:23:20.072145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.072183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:27.978 [2024-07-25 13:23:20.072199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.389 ms 00:27:27.978 [2024-07-25 13:23:20.072211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.072251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.978 [2024-07-25 13:23:20.072268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:27.978 [2024-07-25 13:23:20.072285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:27.978 [2024-07-25 13:23:20.072297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.978 [2024-07-25 13:23:20.072350] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:27.978 [2024-07-25 13:23:20.072384] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:27.979 [2024-07-25 13:23:20.072428] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:27.979 [2024-07-25 13:23:20.072449] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:27:27.979 [2024-07-25 13:23:20.072557] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:27.979 [2024-07-25 13:23:20.072579] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:27.979 [2024-07-25 13:23:20.072594] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:27.979 [2024-07-25 13:23:20.072610] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:27.979 [2024-07-25 13:23:20.072623] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:27.979 [2024-07-25 13:23:20.072636] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:27.979 [2024-07-25 13:23:20.072647] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:27.979 [2024-07-25 13:23:20.072658] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:27.979 [2024-07-25 13:23:20.072670] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:27.979 [2024-07-25 13:23:20.072681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.979 [2024-07-25 13:23:20.072692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:27.979 [2024-07-25 13:23:20.072704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:27:27.979 [2024-07-25 13:23:20.072719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.979 [2024-07-25 13:23:20.072818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.979 [2024-07-25 13:23:20.072839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:27.979 [2024-07-25 13:23:20.072851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:27:27.979 [2024-07-25 13:23:20.072862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.979 [2024-07-25 13:23:20.073025] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:27.979 [2024-07-25 13:23:20.073047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:27.979 [2024-07-25 13:23:20.073060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:27.979 [2024-07-25 13:23:20.073101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073130] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:27.979 [2024-07-25 13:23:20.073142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:27.979 [2024-07-25 13:23:20.073154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:27.979 [2024-07-25 13:23:20.073165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:27.979 [2024-07-25 13:23:20.073187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:27.979 [2024-07-25 13:23:20.073197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:27.979 [2024-07-25 13:23:20.073219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:27.979 [2024-07-25 13:23:20.073229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:27.979 [2024-07-25 13:23:20.073253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:27.979 [2024-07-25 13:23:20.073264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073274] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:27.979 [2024-07-25 13:23:20.073286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:27.979 [2024-07-25 13:23:20.073296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:27.979 [2024-07-25 13:23:20.073317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:27.979 [2024-07-25 13:23:20.073328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:27.979 [2024-07-25 13:23:20.073349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:27.979 [2024-07-25 13:23:20.073359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:27.979 [2024-07-25 13:23:20.073381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:27.979 [2024-07-25 13:23:20.073391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:27.979 [2024-07-25 13:23:20.073412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:27.979 [2024-07-25 13:23:20.073422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:27.979 [2024-07-25 13:23:20.073444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:27.979 [2024-07-25 13:23:20.073475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:27.979 [2024-07-25 13:23:20.073512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:27.979 [2024-07-25 13:23:20.073523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073533] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:27.979 [2024-07-25 13:23:20.073545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:27.979 [2024-07-25 13:23:20.073556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:27.979 [2024-07-25 13:23:20.073578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:27.979 [2024-07-25 13:23:20.073590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:27.979 [2024-07-25 13:23:20.073601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:27.979 [2024-07-25 13:23:20.073612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:27.979 [2024-07-25 13:23:20.073637] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:27.979 [2024-07-25 13:23:20.073649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:27.979 [2024-07-25 13:23:20.073661] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:27.979 [2024-07-25 13:23:20.073675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:27.979 [2024-07-25 13:23:20.073701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:27.979 [2024-07-25 13:23:20.073735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:27.979 [2024-07-25 13:23:20.073747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:27.979 [2024-07-25 13:23:20.073758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:27.979 [2024-07-25 13:23:20.073770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:27.979 [2024-07-25 13:23:20.073852] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:27.979 [2024-07-25 13:23:20.073865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:27.979 [2024-07-25 13:23:20.073891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:27.979 [2024-07-25 13:23:20.073904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:27.979 [2024-07-25 13:23:20.073917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:27.979 [2024-07-25 13:23:20.073930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.979 [2024-07-25 13:23:20.073942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:27.980 [2024-07-25 13:23:20.073954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.983 ms 00:27:27.980 [2024-07-25 13:23:20.073970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.980 [2024-07-25 13:23:20.074038] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:27.980 [2024-07-25 13:23:20.074057] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:29.875 [2024-07-25 13:23:22.012137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.875 [2024-07-25 13:23:22.012208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:29.875 [2024-07-25 13:23:22.012230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1938.111 ms 00:27:29.875 [2024-07-25 13:23:22.012255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.876 [2024-07-25 13:23:22.044739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.876 [2024-07-25 13:23:22.044800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:29.876 [2024-07-25 13:23:22.044821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.221 ms 00:27:29.876 [2024-07-25 13:23:22.044834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.876 [2024-07-25 13:23:22.044991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.876 [2024-07-25 13:23:22.045013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:29.876 [2024-07-25 13:23:22.045027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:29.876 [2024-07-25 13:23:22.045038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.084049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.084122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:30.151 [2024-07-25 13:23:22.084144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.947 ms 00:27:30.151 [2024-07-25 13:23:22.084156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.084233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.084249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:30.151 [2024-07-25 13:23:22.084263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.151 [2024-07-25 13:23:22.084274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.084694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.084715] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:30.151 [2024-07-25 13:23:22.084730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:27:30.151 [2024-07-25 13:23:22.084741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.084800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.084817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:30.151 [2024-07-25 13:23:22.084830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:30.151 [2024-07-25 13:23:22.084841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.102418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.102476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:30.151 [2024-07-25 13:23:22.102495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.545 ms 00:27:30.151 [2024-07-25 13:23:22.102507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.118948] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:30.151 [2024-07-25 13:23:22.118995] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:30.151 [2024-07-25 13:23:22.119021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.119035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:30.151 [2024-07-25 13:23:22.119050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.333 ms 00:27:30.151 [2024-07-25 13:23:22.119061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.137197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.137247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:30.151 [2024-07-25 13:23:22.137266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.062 ms 00:27:30.151 [2024-07-25 13:23:22.137286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.152805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.152849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:30.151 [2024-07-25 13:23:22.152868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.442 ms 00:27:30.151 [2024-07-25 13:23:22.152889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.168384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.168433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:30.151 [2024-07-25 13:23:22.168450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.440 ms 00:27:30.151 [2024-07-25 13:23:22.168462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.169322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.169360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:30.151 [2024-07-25 
13:23:22.169376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.712 ms 00:27:30.151 [2024-07-25 13:23:22.169388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.249278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.249352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:30.151 [2024-07-25 13:23:22.249374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.858 ms 00:27:30.151 [2024-07-25 13:23:22.249386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.262133] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:30.151 [2024-07-25 13:23:22.262940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.262978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:30.151 [2024-07-25 13:23:22.262996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.460 ms 00:27:30.151 [2024-07-25 13:23:22.263009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.263163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.263186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:30.151 [2024-07-25 13:23:22.263200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:30.151 [2024-07-25 13:23:22.263212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.263298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.263324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:30.151 [2024-07-25 13:23:22.263344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:30.151 [2024-07-25 13:23:22.263356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.263393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.263409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:30.151 [2024-07-25 13:23:22.263422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:30.151 [2024-07-25 13:23:22.263433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.263476] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:30.151 [2024-07-25 13:23:22.263494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.151 [2024-07-25 13:23:22.263506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:30.151 [2024-07-25 13:23:22.263523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:30.151 [2024-07-25 13:23:22.263538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.151 [2024-07-25 13:23:22.294662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.152 [2024-07-25 13:23:22.294711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:30.152 [2024-07-25 13:23:22.294731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.095 ms 00:27:30.152 [2024-07-25 13:23:22.294743] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.152 [2024-07-25 13:23:22.294843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.152 [2024-07-25 13:23:22.294865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:30.152 [2024-07-25 13:23:22.294886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:30.152 [2024-07-25 13:23:22.294898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.152 [2024-07-25 13:23:22.296099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2251.992 ms, result 0 00:27:30.152 [2024-07-25 13:23:22.311135] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.152 [2024-07-25 13:23:22.327144] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:30.416 [2024-07-25 13:23:22.336161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:30.416 13:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:30.416 13:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:30.416 13:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:30.417 13:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:30.417 13:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:30.674 [2024-07-25 13:23:22.628354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.674 [2024-07-25 13:23:22.628424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.674 [2024-07-25 13:23:22.628447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:30.674 [2024-07-25 13:23:22.628460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.674 [2024-07-25 13:23:22.628497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.674 [2024-07-25 13:23:22.628514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.674 [2024-07-25 13:23:22.628527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:30.674 [2024-07-25 13:23:22.628538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.674 [2024-07-25 13:23:22.628566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.675 [2024-07-25 13:23:22.628581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.675 [2024-07-25 13:23:22.628600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:30.675 [2024-07-25 13:23:22.628611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.675 [2024-07-25 13:23:22.628687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.324 ms, result 0 00:27:30.675 true 00:27:30.675 13:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.932 { 00:27:30.932 "name": "ftl", 00:27:30.932 "properties": [ 00:27:30.932 { 00:27:30.932 "name": "superblock_version", 00:27:30.932 "value": 5, 00:27:30.932 "read-only": true 00:27:30.932 }, 
00:27:30.932 { 00:27:30.932 "name": "base_device", 00:27:30.932 "bands": [ 00:27:30.932 { 00:27:30.932 "id": 0, 00:27:30.932 "state": "CLOSED", 00:27:30.932 "validity": 1.0 00:27:30.932 }, 00:27:30.932 { 00:27:30.932 "id": 1, 00:27:30.932 "state": "CLOSED", 00:27:30.932 "validity": 1.0 00:27:30.932 }, 00:27:30.932 { 00:27:30.932 "id": 2, 00:27:30.932 "state": "CLOSED", 00:27:30.932 "validity": 0.007843137254901933 00:27:30.932 }, 00:27:30.932 { 00:27:30.932 "id": 3, 00:27:30.932 "state": "FREE", 00:27:30.932 "validity": 0.0 00:27:30.932 }, 00:27:30.932 { 00:27:30.932 "id": 4, 00:27:30.932 "state": "FREE", 00:27:30.932 "validity": 0.0 00:27:30.932 }, 00:27:30.932 { 00:27:30.932 "id": 5, 00:27:30.932 "state": "FREE", 00:27:30.932 "validity": 0.0 00:27:30.932 }, 00:27:30.932 { 00:27:30.932 "id": 6, 00:27:30.932 "state": "FREE", 00:27:30.932 "validity": 0.0 00:27:30.932 }, 00:27:30.932 { 00:27:30.932 "id": 7, 00:27:30.932 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 8, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 9, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 10, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 11, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 12, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 13, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 14, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 15, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 16, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 17, 00:27:30.933 "state": "FREE", 00:27:30.933 "validity": 0.0 00:27:30.933 } 00:27:30.933 ], 00:27:30.933 "read-only": true 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "name": "cache_device", 00:27:30.933 "type": "bdev", 00:27:30.933 "chunks": [ 00:27:30.933 { 00:27:30.933 "id": 0, 00:27:30.933 "state": "INACTIVE", 00:27:30.933 "utilization": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 1, 00:27:30.933 "state": "OPEN", 00:27:30.933 "utilization": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 2, 00:27:30.933 "state": "OPEN", 00:27:30.933 "utilization": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 3, 00:27:30.933 "state": "FREE", 00:27:30.933 "utilization": 0.0 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "id": 4, 00:27:30.933 "state": "FREE", 00:27:30.933 "utilization": 0.0 00:27:30.933 } 00:27:30.933 ], 00:27:30.933 "read-only": true 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "name": "verbose_mode", 00:27:30.933 "value": true, 00:27:30.933 "unit": "", 00:27:30.933 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:30.933 }, 00:27:30.933 { 00:27:30.933 "name": "prep_upgrade_on_shutdown", 00:27:30.933 "value": false, 00:27:30.933 "unit": "", 00:27:30.933 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:30.933 } 00:27:30.933 ] 00:27:30.933 } 00:27:30.933 13:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:30.933 13:23:22 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.933 13:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:31.191 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:31.191 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:31.191 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:31.191 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:31.191 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:31.460 Validate MD5 checksum, iteration 1 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:31.460 13:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:31.460 [2024-07-25 13:23:23.609688] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
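The two jq filters traced above inspect the bdev_ftl_get_properties JSON dumped earlier: the first counts cache_device chunks whose utilization is non-zero, the second counts band entries still in the OPENED state. Both evaluate to 0 here, so the [[ 0 -ne 0 ]] guards fall through and the checksum validation can start. The chunk check can be reproduced by hand against the running target (rpc.py path and filter copied from the trace):

    # how many NV-cache chunks still hold data? (0 expected right after startup)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'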
00:27:31.460 [2024-07-25 13:23:23.609858] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84966 ] 00:27:31.729 [2024-07-25 13:23:23.799161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.986 [2024-07-25 13:23:23.996396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.616  Copying: 489/1024 [MB] (489 MBps) Copying: 921/1024 [MB] (432 MBps) Copying: 1024/1024 [MB] (average 455 MBps) 00:27:36.616 00:27:36.616 13:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:36.616 13:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:38.521 Validate MD5 checksum, iteration 2 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ae295715b9c27aa2f850574d2d7d3d1d 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ae295715b9c27aa2f850574d2d7d3d1d != \a\e\2\9\5\7\1\5\b\9\c\2\7\a\a\2\f\8\5\0\5\7\4\d\2\d\7\d\3\d\1\d ]] 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:38.521 13:23:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:38.521 [2024-07-25 13:23:30.688878] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
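Each "Validate MD5 checksum, iteration N" round re-reads a 1 GiB window of the FTL volume over NVMe/TCP and checks it against the digest the test holds for that window: tcp_dd drives spdk_dd with --ib=ftln1, 1024 blocks of 1 MiB at queue depth 2 starting at --skip, the first field of md5sum is extracted with cut, and a mismatch would trip the [[ ... != ... ]] guard. A condensed sketch of one iteration, with helper and file names as in this log ($expected stands in for the reference digest, which is captured outside this excerpt):

    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum != "$expected" ]] && exit 1   # iteration 1 above matched: ae295715b9c27aa2f850574d2d7d3d1d
    skip=$((skip + 1024))                 # advance to the next 1 GiB window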
00:27:38.521 [2024-07-25 13:23:30.689066] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85039 ] 00:27:38.779 [2024-07-25 13:23:30.874959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.037 [2024-07-25 13:23:31.076068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.287  Copying: 448/1024 [MB] (448 MBps) Copying: 892/1024 [MB] (444 MBps) Copying: 1024/1024 [MB] (average 452 MBps) 00:27:44.287 00:27:44.287 13:23:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:44.287 13:23:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=01b75fcb8f397cf237b4ea05b30f475a 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 01b75fcb8f397cf237b4ea05b30f475a != \0\1\b\7\5\f\c\b\8\f\3\9\7\c\f\2\3\7\b\4\e\a\0\5\b\3\0\f\4\7\5\a ]] 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84899 ]] 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84899 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85118 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85118 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85118 ']' 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
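tcp_target_shutdown_dirty forces an unclean stop: the running target (pid 84899) is killed with SIGKILL so FTL gets no chance to run its clean shutdown path, and tcp_target_setup immediately brings up a replacement (pid 85118) from the same tgt.json, which then has to start from the dirty state left behind. A minimal sketch of the sequence, using the helper names from this log:

    kill -9 "$spdk_tgt_pid"    # SIGKILL: no clean FTL shutdown, the device stays marked dirty
    unset spdk_tgt_pid
    tcp_target_setup           # relaunch spdk_tgt with the same tgt.json and wait for /var/tmp/spdk.sock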
00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.817 13:23:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:46.817 [2024-07-25 13:23:38.649536] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:46.817 [2024-07-25 13:23:38.650537] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85118 ] 00:27:46.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 84899 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:46.817 [2024-07-25 13:23:38.825137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.074 [2024-07-25 13:23:39.012555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.640 [2024-07-25 13:23:39.804181] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.640 [2024-07-25 13:23:39.804258] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.899 [2024-07-25 13:23:39.952234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.952311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:47.899 [2024-07-25 13:23:39.952333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:47.899 [2024-07-25 13:23:39.952345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.952423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.952443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:47.899 [2024-07-25 13:23:39.952456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:47.899 [2024-07-25 13:23:39.952467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.952505] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:47.899 [2024-07-25 13:23:39.953454] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:47.899 [2024-07-25 13:23:39.953493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.953508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:47.899 [2024-07-25 13:23:39.953520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.999 ms 00:27:47.899 [2024-07-25 13:23:39.953537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.954010] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:47.899 [2024-07-25 13:23:39.974241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.974295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:47.899 [2024-07-25 13:23:39.974322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.232 ms 
00:27:47.899 [2024-07-25 13:23:39.974335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.986417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.986482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:47.899 [2024-07-25 13:23:39.986501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:47.899 [2024-07-25 13:23:39.986514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.987086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.987136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:47.899 [2024-07-25 13:23:39.987152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.449 ms 00:27:47.899 [2024-07-25 13:23:39.987164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.987236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.987256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:47.899 [2024-07-25 13:23:39.987269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:47.899 [2024-07-25 13:23:39.987279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.987323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.987338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:47.899 [2024-07-25 13:23:39.987354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:47.899 [2024-07-25 13:23:39.987366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.987401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:47.899 [2024-07-25 13:23:39.991304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.991344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:47.899 [2024-07-25 13:23:39.991360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.913 ms 00:27:47.899 [2024-07-25 13:23:39.991372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.991411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.991427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:47.899 [2024-07-25 13:23:39.991439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:47.899 [2024-07-25 13:23:39.991450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.991502] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:47.899 [2024-07-25 13:23:39.991532] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:47.899 [2024-07-25 13:23:39.991578] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:47.899 [2024-07-25 13:23:39.991598] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:27:47.899 [2024-07-25 
13:23:39.991704] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:47.899 [2024-07-25 13:23:39.991720] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:47.899 [2024-07-25 13:23:39.991735] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:47.899 [2024-07-25 13:23:39.991749] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:47.899 [2024-07-25 13:23:39.991762] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:47.899 [2024-07-25 13:23:39.991779] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:47.899 [2024-07-25 13:23:39.991790] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:47.899 [2024-07-25 13:23:39.991801] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:47.899 [2024-07-25 13:23:39.991812] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:47.899 [2024-07-25 13:23:39.991827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.991838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:47.899 [2024-07-25 13:23:39.991850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.328 ms 00:27:47.899 [2024-07-25 13:23:39.991861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.991952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.899 [2024-07-25 13:23:39.991966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:47.899 [2024-07-25 13:23:39.991982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:27:47.899 [2024-07-25 13:23:39.991993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.899 [2024-07-25 13:23:39.992124] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:47.899 [2024-07-25 13:23:39.992144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:47.899 [2024-07-25 13:23:39.992156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.899 [2024-07-25 13:23:39.992167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.899 [2024-07-25 13:23:39.992178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:47.899 [2024-07-25 13:23:39.992188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:47.899 [2024-07-25 13:23:39.992200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:47.899 [2024-07-25 13:23:39.992210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:47.899 [2024-07-25 13:23:39.992220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:47.899 [2024-07-25 13:23:39.992230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.899 [2024-07-25 13:23:39.992240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:47.899 [2024-07-25 13:23:39.992249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:47.899 [2024-07-25 13:23:39.992259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.899 [2024-07-25 
13:23:39.992269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:47.899 [2024-07-25 13:23:39.992280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:47.900 [2024-07-25 13:23:39.992290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:47.900 [2024-07-25 13:23:39.992310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:47.900 [2024-07-25 13:23:39.992320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:47.900 [2024-07-25 13:23:39.992340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:47.900 [2024-07-25 13:23:39.992350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.900 [2024-07-25 13:23:39.992360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:47.900 [2024-07-25 13:23:39.992370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:47.900 [2024-07-25 13:23:39.992380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.900 [2024-07-25 13:23:39.992390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:47.900 [2024-07-25 13:23:39.992400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:47.900 [2024-07-25 13:23:39.992410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.900 [2024-07-25 13:23:39.992419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:47.900 [2024-07-25 13:23:39.992430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:47.900 [2024-07-25 13:23:39.992440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.900 [2024-07-25 13:23:39.992450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:47.900 [2024-07-25 13:23:39.992460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:47.900 [2024-07-25 13:23:39.992470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:47.900 [2024-07-25 13:23:39.992490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:47.900 [2024-07-25 13:23:39.992500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:47.900 [2024-07-25 13:23:39.992521] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:47.900 [2024-07-25 13:23:39.992551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:47.900 [2024-07-25 13:23:39.992561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992570] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:47.900 [2024-07-25 13:23:39.992582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:47.900 
[2024-07-25 13:23:39.992592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.900 [2024-07-25 13:23:39.992603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.900 [2024-07-25 13:23:39.992620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:47.900 [2024-07-25 13:23:39.992630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:47.900 [2024-07-25 13:23:39.992653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:47.900 [2024-07-25 13:23:39.992665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:47.900 [2024-07-25 13:23:39.992675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:47.900 [2024-07-25 13:23:39.992685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:47.900 [2024-07-25 13:23:39.992697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:47.900 [2024-07-25 13:23:39.992710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:47.900 [2024-07-25 13:23:39.992734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:47.900 [2024-07-25 13:23:39.992768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:47.900 [2024-07-25 13:23:39.992779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:47.900 [2024-07-25 13:23:39.992790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:47.900 [2024-07-25 13:23:39.992801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:47.900 [2024-07-25 13:23:39.992880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:47.900 [2024-07-25 13:23:39.992896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:47.900 [2024-07-25 13:23:39.992919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:47.900 [2024-07-25 13:23:39.992931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:47.900 [2024-07-25 13:23:39.992942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:47.900 [2024-07-25 13:23:39.992954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.900 [2024-07-25 13:23:39.992976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:47.900 [2024-07-25 13:23:39.992989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.914 ms 00:27:47.900 [2024-07-25 13:23:39.993000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.900 [2024-07-25 13:23:40.024150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.900 [2024-07-25 13:23:40.024212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:47.900 [2024-07-25 13:23:40.024233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.074 ms 00:27:47.900 [2024-07-25 13:23:40.024246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.900 [2024-07-25 13:23:40.024324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.900 [2024-07-25 13:23:40.024340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:47.900 [2024-07-25 13:23:40.024359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:47.900 [2024-07-25 13:23:40.024370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.900 [2024-07-25 13:23:40.067542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.900 [2024-07-25 13:23:40.067615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:47.900 [2024-07-25 13:23:40.067646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.072 ms 00:27:47.900 [2024-07-25 13:23:40.067669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.900 [2024-07-25 13:23:40.067778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.900 [2024-07-25 13:23:40.067797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:47.900 [2024-07-25 13:23:40.067812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:47.900 [2024-07-25 13:23:40.067823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.900 [2024-07-25 13:23:40.068029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.900 [2024-07-25 13:23:40.068072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
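The layout dump above is internally consistent and the headline numbers can be re-derived from it. The data_btm region is 18432 MiB of 4 KiB blocks, i.e. 4,718,592 blocks; the reported 3,774,873 L2P entries are almost exactly 80% of that, which is consistent with FTL holding back roughly 20% of the bands as overprovisioning (an inference from the figures, not something the log states). The l2p region size then follows from the 4-byte address size: 3,774,873 x 4 B is about 14.4 MiB, matching the 14.50 MiB region printed earlier. As bash arithmetic (4 KiB block size assumed):

echo $((18432 * 1024 * 1024 / 4096))           # data_btm blocks        -> 4718592
echo $((18432 * 1024 * 1024 / 4096 * 8 / 10))  # ~80% exposed LBAs      -> 3774873 (L2P entries)
echo $((3774873 * 4 / 1024 / 1024))            # L2P table size in MiB  -> 14 (region rounded up to 14.50 MiB)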
00:27:47.900 [2024-07-25 13:23:40.068094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:27:47.900 [2024-07-25 13:23:40.068133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.900 [2024-07-25 13:23:40.068202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.900 [2024-07-25 13:23:40.068218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:47.900 [2024-07-25 13:23:40.068230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:47.900 [2024-07-25 13:23:40.068241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.087981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.088055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:48.159 [2024-07-25 13:23:40.088115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.708 ms 00:27:48.159 [2024-07-25 13:23:40.088132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.088332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.088365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:48.159 [2024-07-25 13:23:40.088381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:48.159 [2024-07-25 13:23:40.088398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.117623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.117707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:48.159 [2024-07-25 13:23:40.117741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.193 ms 00:27:48.159 [2024-07-25 13:23:40.117774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.132608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.132661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:48.159 [2024-07-25 13:23:40.132681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.693 ms 00:27:48.159 [2024-07-25 13:23:40.132693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.205914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.205994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:48.159 [2024-07-25 13:23:40.206017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.117 ms 00:27:48.159 [2024-07-25 13:23:40.206029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.206282] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:48.159 [2024-07-25 13:23:40.206433] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:48.159 [2024-07-25 13:23:40.206589] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:48.159 [2024-07-25 13:23:40.206731] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:48.159 [2024-07-25 13:23:40.206753] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.206766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:48.159 [2024-07-25 13:23:40.206786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.612 ms 00:27:48.159 [2024-07-25 13:23:40.206797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.206915] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:48.159 [2024-07-25 13:23:40.206936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.206948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:48.159 [2024-07-25 13:23:40.206961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:48.159 [2024-07-25 13:23:40.206972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.227327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.227383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:48.159 [2024-07-25 13:23:40.227402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.312 ms 00:27:48.159 [2024-07-25 13:23:40.227415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.239355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.159 [2024-07-25 13:23:40.239401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:48.159 [2024-07-25 13:23:40.239423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:48.159 [2024-07-25 13:23:40.239435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.159 [2024-07-25 13:23:40.239664] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:48.724 [2024-07-25 13:23:40.759347] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:48.724 [2024-07-25 13:23:40.759591] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:49.290 [2024-07-25 13:23:41.282816] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:49.291 [2024-07-25 13:23:41.282954] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:49.291 [2024-07-25 13:23:41.282978] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:49.291 [2024-07-25 13:23:41.282996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.283010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:49.291 [2024-07-25 13:23:41.283027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1043.461 ms 00:27:49.291 [2024-07-25 13:23:41.283039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.283090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.283124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:49.291 
[2024-07-25 13:23:41.283140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.291 [2024-07-25 13:23:41.283161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.295919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:49.291 [2024-07-25 13:23:41.296083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.296116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:49.291 [2024-07-25 13:23:41.296133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.897 ms 00:27:49.291 [2024-07-25 13:23:41.296145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.296924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.296973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:49.291 [2024-07-25 13:23:41.296990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.647 ms 00:27:49.291 [2024-07-25 13:23:41.297008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.299547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.299581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:49.291 [2024-07-25 13:23:41.299595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.510 ms 00:27:49.291 [2024-07-25 13:23:41.299607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.299659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.299676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:49.291 [2024-07-25 13:23:41.299688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.291 [2024-07-25 13:23:41.299700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.299847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.299864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:49.291 [2024-07-25 13:23:41.299876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:49.291 [2024-07-25 13:23:41.299887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.299917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.299931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:49.291 [2024-07-25 13:23:41.299943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:49.291 [2024-07-25 13:23:41.299954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.299996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:49.291 [2024-07-25 13:23:41.300020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.300037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:49.291 [2024-07-25 13:23:41.300049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:49.291 [2024-07-25 
13:23:41.300059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.300138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.291 [2024-07-25 13:23:41.300156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:49.291 [2024-07-25 13:23:41.300168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:49.291 [2024-07-25 13:23:41.300179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.291 [2024-07-25 13:23:41.301407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1348.627 ms, result 0 00:27:49.291 [2024-07-25 13:23:41.316790] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.291 [2024-07-25 13:23:41.332805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:49.291 [2024-07-25 13:23:41.341881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:49.291 Validate MD5 checksum, iteration 1 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:49.291 13:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:49.291 [2024-07-25 13:23:41.458913] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
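The shell trace above (upgrade_shutdown.sh lines 96-105) is the checksum-validation pass: each iteration reads 1024 x 1 MiB from the recovered ftln1 bdev over NVMe/TCP into test/ftl/file, hashes the file, and compares it with the checksum captured before the target was killed. Read back as a script it is roughly the loop below; $testdir and ref_md5 are illustrative names standing in for whatever the real script uses, and the reference checksums come from a write phase outside this excerpt:

test_validate_checksum() {
	local skip=0 i sum
	for ((i = 0; i < iterations; i++)); do
		echo "Validate MD5 checksum, iteration $((i + 1))"
		# read the next 1024 MiB window back from the recovered device
		tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
		skip=$((skip + 1024))
		sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
		# must match the checksum recorded before the dirty shutdown
		[[ $sum == "${ref_md5[i]}" ]] || return 1
	done
}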
00:27:49.291 [2024-07-25 13:23:41.459067] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85154 ] 00:27:49.565 [2024-07-25 13:23:41.617220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.825 [2024-07-25 13:23:41.806439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.447  Copying: 485/1024 [MB] (485 MBps) Copying: 910/1024 [MB] (425 MBps) Copying: 1024/1024 [MB] (average 458 MBps) 00:27:54.447 00:27:54.447 13:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:54.447 13:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:56.348 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:56.349 Validate MD5 checksum, iteration 2 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ae295715b9c27aa2f850574d2d7d3d1d 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ae295715b9c27aa2f850574d2d7d3d1d != \a\e\2\9\5\7\1\5\b\9\c\2\7\a\a\2\f\8\5\0\5\7\4\d\2\d\7\d\3\d\1\d ]] 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:56.349 13:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:56.607 [2024-07-25 13:23:48.544675] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
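The "Copying" progress above comes from spdk_dd acting as the NVMe/TCP initiator against the target listening on 127.0.0.1:4420. The tcp_dd helper traced at ftl/common.sh lines 198-199 just checks that the initiator-side ini.json exists (tcp_initiator_setup) and then launches spdk_dd pinned to core 1 with that config plus the caller's dd-style arguments. A sketch of that wrapper as the trace implies, with $spdk_dd_bin and $ftl_config_dir as placeholders for the absolute paths shown in the log:

tcp_dd() {
	# ftl/common.sh@198: ensure the initiator JSON config is in place
	tcp_initiator_setup
	# ftl/common.sh@199: run the copy over NVMe/TCP; "$@" carries --ib/--of/--bs/--count/--qd/--skip
	"$spdk_dd_bin" '--cpumask=[1]' \
		--rpc-socket=/var/tmp/spdk.tgt.sock \
		--json="$ftl_config_dir/ini.json" \
		"$@"
}

# as invoked above: tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=1024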
00:27:56.607 [2024-07-25 13:23:48.544898] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85227 ] 00:27:56.607 [2024-07-25 13:23:48.722138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.864 [2024-07-25 13:23:48.941298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.994  Copying: 431/1024 [MB] (431 MBps) Copying: 903/1024 [MB] (472 MBps) Copying: 1024/1024 [MB] (average 447 MBps) 00:28:02.994 00:28:02.994 13:23:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:02.994 13:23:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=01b75fcb8f397cf237b4ea05b30f475a 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 01b75fcb8f397cf237b4ea05b30f475a != \0\1\b\7\5\f\c\b\8\f\3\9\7\c\f\2\3\7\b\4\e\a\0\5\b\3\0\f\4\7\5\a ]] 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85118 ]] 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85118 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85118 ']' 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85118 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85118 00:28:04.905 killing process with pid 85118 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85118' 00:28:04.905 13:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85118 00:28:04.905 13:23:56 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85118 00:28:05.841 [2024-07-25 13:23:57.852397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:05.841 [2024-07-25 13:23:57.870558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.870616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:05.841 [2024-07-25 13:23:57.870645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:05.841 [2024-07-25 13:23:57.870665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.870707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:05.841 [2024-07-25 13:23:57.874073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.874121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:05.841 [2024-07-25 13:23:57.874138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.344 ms 00:28:05.841 [2024-07-25 13:23:57.874151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.874385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.874410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:05.841 [2024-07-25 13:23:57.874424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:28:05.841 [2024-07-25 13:23:57.874435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.875616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.875661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:05.841 [2024-07-25 13:23:57.875689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:28:05.841 [2024-07-25 13:23:57.875700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.877004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.877040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:05.841 [2024-07-25 13:23:57.877056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.259 ms 00:28:05.841 [2024-07-25 13:23:57.877067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.889747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.889803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:05.841 [2024-07-25 13:23:57.889822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.587 ms 00:28:05.841 [2024-07-25 13:23:57.889834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.896373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.896420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:05.841 [2024-07-25 13:23:57.896437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.491 ms 00:28:05.841 [2024-07-25 13:23:57.896449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.896544] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.896563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:05.841 [2024-07-25 13:23:57.896582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:05.841 [2024-07-25 13:23:57.896594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.908747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.908789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:05.841 [2024-07-25 13:23:57.908806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.130 ms 00:28:05.841 [2024-07-25 13:23:57.908816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.921013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.921054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:05.841 [2024-07-25 13:23:57.921070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.154 ms 00:28:05.841 [2024-07-25 13:23:57.921080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.933140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.933180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:05.841 [2024-07-25 13:23:57.933197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.008 ms 00:28:05.841 [2024-07-25 13:23:57.933208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.945368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.945408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:05.841 [2024-07-25 13:23:57.945423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.081 ms 00:28:05.841 [2024-07-25 13:23:57.945434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.841 [2024-07-25 13:23:57.945476] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:05.841 [2024-07-25 13:23:57.945501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:05.841 [2024-07-25 13:23:57.945516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:05.841 [2024-07-25 13:23:57.945528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:05.841 [2024-07-25 13:23:57.945540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:05.841 [2024-07-25 13:23:57.945732] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:05.841 [2024-07-25 13:23:57.945744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b0d8f7f3-bdb3-4318-bcb9-6fd0008eaba3 00:28:05.841 [2024-07-25 13:23:57.945756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:05.841 [2024-07-25 13:23:57.945767] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:05.841 [2024-07-25 13:23:57.945777] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:05.841 [2024-07-25 13:23:57.945788] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:05.841 [2024-07-25 13:23:57.945798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:05.841 [2024-07-25 13:23:57.945816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:05.841 [2024-07-25 13:23:57.945827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:05.841 [2024-07-25 13:23:57.945837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:05.841 [2024-07-25 13:23:57.945848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:05.841 [2024-07-25 13:23:57.945866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.841 [2024-07-25 13:23:57.945878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:05.842 [2024-07-25 13:23:57.945890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:28:05.842 [2024-07-25 13:23:57.945901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.842 [2024-07-25 13:23:57.962381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.842 [2024-07-25 13:23:57.962426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:05.842 [2024-07-25 13:23:57.962443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.454 ms 00:28:05.842 [2024-07-25 13:23:57.962463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.842 [2024-07-25 13:23:57.962898] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:05.842 [2024-07-25 13:23:57.962921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:05.842 [2024-07-25 13:23:57.962935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:28:05.842 [2024-07-25 13:23:57.962945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.842 [2024-07-25 13:23:58.014698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.842 [2024-07-25 13:23:58.014766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:05.842 [2024-07-25 13:23:58.014791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.842 [2024-07-25 13:23:58.014804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.842 [2024-07-25 13:23:58.014867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.842 [2024-07-25 13:23:58.014882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:05.842 [2024-07-25 13:23:58.014894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.842 [2024-07-25 13:23:58.014904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.842 [2024-07-25 13:23:58.015019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.842 [2024-07-25 13:23:58.015039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:05.842 [2024-07-25 13:23:58.015051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.842 [2024-07-25 13:23:58.015069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.842 [2024-07-25 13:23:58.015093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.842 [2024-07-25 13:23:58.015123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:05.842 [2024-07-25 13:23:58.015137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.842 [2024-07-25 13:23:58.015148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.113855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.113924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:06.101 [2024-07-25 13:23:58.113953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.113965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.198476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.198549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:06.101 [2024-07-25 13:23:58.198570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.198582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.198713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.198732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:06.101 [2024-07-25 13:23:58.198745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.198756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 
13:23:58.198826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.198842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:06.101 [2024-07-25 13:23:58.198855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.198866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.198984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.199013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:06.101 [2024-07-25 13:23:58.199027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.199038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.199094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.199143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:06.101 [2024-07-25 13:23:58.199156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.199167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.199213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.199236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:06.101 [2024-07-25 13:23:58.199247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.199258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.199315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:06.101 [2024-07-25 13:23:58.199338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:06.101 [2024-07-25 13:23:58.199350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:06.101 [2024-07-25 13:23:58.199361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.101 [2024-07-25 13:23:58.199503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 328.912 ms, result 0 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:07.477 Remove shared memory files 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84899 
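With both checksums matching, the script drops its exit trap and tears everything down; the trace spells the sequence out across upgrade_shutdown.sh lines 11-15 and ftl/common.sh lines 130-209 (kill the target, delete the generated JSON configs, clear shared memory). Collapsed back into the function it traces, it is roughly:

cleanup() {                                    # upgrade_shutdown.sh@11-15 per the trace
	trap - SIGINT SIGTERM EXIT                 # @11
	rm -f "$testdir/file" "$testdir/file.md5"  # @12, @13: drop the readback file and its checksum
	tcp_cleanup                                # @14: tcp_target_cleanup -> killprocess $spdk_tgt_pid, rm tgt.json;
	                                           #      tcp_initiator_cleanup -> rm ini.json (no initiator pid here)
	remove_shm                                 # @15: common.sh@204+, rm /dev/shm/spdk_tgt_trace.pid* and friends
}

($testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl.)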
00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:07.477 00:28:07.477 real 1m34.509s 00:28:07.477 user 2m16.949s 00:28:07.477 sys 0m22.610s 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:07.477 ************************************ 00:28:07.477 13:23:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:07.477 END TEST ftl_upgrade_shutdown 00:28:07.477 ************************************ 00:28:07.477 13:23:59 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:07.477 13:23:59 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:07.477 13:23:59 ftl -- ftl/ftl.sh@14 -- # killprocess 77773 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@950 -- # '[' -z 77773 ']' 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@954 -- # kill -0 77773 00:28:07.477 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77773) - No such process 00:28:07.477 Process with pid 77773 is not found 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 77773 is not found' 00:28:07.477 13:23:59 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:07.477 13:23:59 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:07.477 13:23:59 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85369 00:28:07.477 13:23:59 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85369 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@831 -- # '[' -z 85369 ']' 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.477 13:23:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:07.477 [2024-07-25 13:23:59.490076] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
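killprocess appears three times in this tail: pid 85118 (the FTL target, killed cleanly above), pid 77773 (already gone, hence the "No such process" / "is not found" pair just shown), and pid 85369 further down. The autotest_common.sh trace lines 950-977 give away its structure; a sketch reconstructed from them, with the return codes and the skipped sudo branch as guesses:

killprocess() {
	local pid=$1 process_name
	[ -n "$pid" ] || return 1                            # @950: '[' -z "$pid" ']'
	if ! kill -0 "$pid"; then                            # @954: probe only; pid 77773 took this branch
		echo "Process with pid $pid is not found"        # @977
		return 0
	fi
	if [ "$(uname)" = Linux ]; then                      # @955
		process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_0 for an SPDK app
	fi
	# @960 compares $process_name against sudo; that branch is never taken in this log
	echo "killing process with pid $pid"                 # @968
	kill "$pid"                                          # @969
	wait "$pid"                                          # @974: reap it so the exit status is visible
}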
00:28:07.477 [2024-07-25 13:23:59.490247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85369 ] 00:28:07.477 [2024-07-25 13:23:59.656992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.736 [2024-07-25 13:23:59.888542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.692 13:24:00 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.692 13:24:00 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:08.692 13:24:00 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:08.950 nvme0n1 00:28:08.950 13:24:00 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:08.950 13:24:00 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:08.950 13:24:00 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:09.207 13:24:01 ftl -- ftl/common.sh@28 -- # stores=f756fef7-44e3-4ea7-b31c-ba882962e7c9 00:28:09.207 13:24:01 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:09.207 13:24:01 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f756fef7-44e3-4ea7-b31c-ba882962e7c9 00:28:09.465 13:24:01 ftl -- ftl/ftl.sh@23 -- # killprocess 85369 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@950 -- # '[' -z 85369 ']' 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@954 -- # kill -0 85369 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@955 -- # uname 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85369 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:09.465 killing process with pid 85369 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85369' 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@969 -- # kill 85369 00:28:09.465 13:24:01 ftl -- common/autotest_common.sh@974 -- # wait 85369 00:28:11.994 13:24:03 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:11.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:11.994 Waiting for block devices as requested 00:28:11.994 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:11.994 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:11.994 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:12.252 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.519 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:17.520 13:24:09 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:17.520 Remove shared memory files 00:28:17.520 13:24:09 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:17.520 13:24:09 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:17.520 13:24:09 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:17.520 13:24:09 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:17.520 13:24:09 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:17.520 13:24:09 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:17.520 00:28:17.520 real 
11m28.779s 00:28:17.520 user 14m32.679s 00:28:17.520 sys 1m29.576s 00:28:17.520 13:24:09 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:17.520 13:24:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:17.520 ************************************ 00:28:17.520 END TEST ftl 00:28:17.520 ************************************ 00:28:17.520 13:24:09 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:17.520 13:24:09 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:17.520 13:24:09 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:17.520 13:24:09 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:28:17.520 13:24:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:17.520 13:24:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:17.520 13:24:09 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:17.520 13:24:09 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:28:17.520 13:24:09 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:28:17.520 13:24:09 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:28:17.520 13:24:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.520 13:24:09 -- common/autotest_common.sh@10 -- # set +x 00:28:17.520 13:24:09 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:28:17.520 13:24:09 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:17.520 13:24:09 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:17.520 13:24:09 -- common/autotest_common.sh@10 -- # set +x 00:28:18.453 INFO: APP EXITING 00:28:18.453 INFO: killing all VMs 00:28:18.453 INFO: killing vhost app 00:28:18.453 INFO: EXIT DONE 00:28:18.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:19.279 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:19.279 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:19.279 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:19.279 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:19.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:20.105 Cleaning 00:28:20.105 Removing: /var/run/dpdk/spdk0/config 00:28:20.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:20.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:20.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:20.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:20.105 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:20.105 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:20.105 Removing: /var/run/dpdk/spdk0 00:28:20.105 Removing: /var/run/dpdk/spdk_pid61848 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62048 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62263 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62362 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62407 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62535 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62554 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62739 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62825 00:28:20.105 Removing: /var/run/dpdk/spdk_pid62924 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63027 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63127 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63167 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63203 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63271 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63377 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63837 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63907 00:28:20.105 
Removing: /var/run/dpdk/spdk_pid63976 00:28:20.105 Removing: /var/run/dpdk/spdk_pid63992 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64140 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64156 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64310 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64327 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64391 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64419 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64479 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64502 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64676 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64713 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64794 00:28:20.105 Removing: /var/run/dpdk/spdk_pid64961 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65056 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65098 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65576 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65679 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65794 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65853 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65884 00:28:20.105 Removing: /var/run/dpdk/spdk_pid65960 00:28:20.105 Removing: /var/run/dpdk/spdk_pid66597 00:28:20.105 Removing: /var/run/dpdk/spdk_pid66639 00:28:20.105 Removing: /var/run/dpdk/spdk_pid67169 00:28:20.105 Removing: /var/run/dpdk/spdk_pid67273 00:28:20.105 Removing: /var/run/dpdk/spdk_pid67388 00:28:20.105 Removing: /var/run/dpdk/spdk_pid67447 00:28:20.105 Removing: /var/run/dpdk/spdk_pid67472 00:28:20.105 Removing: /var/run/dpdk/spdk_pid67503 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69361 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69509 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69513 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69525 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69572 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69576 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69588 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69633 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69637 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69649 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69694 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69698 00:28:20.105 Removing: /var/run/dpdk/spdk_pid69710 00:28:20.105 Removing: /var/run/dpdk/spdk_pid71067 00:28:20.105 Removing: /var/run/dpdk/spdk_pid71162 00:28:20.105 Removing: /var/run/dpdk/spdk_pid72566 00:28:20.105 Removing: /var/run/dpdk/spdk_pid73916 00:28:20.105 Removing: /var/run/dpdk/spdk_pid74049 00:28:20.105 Removing: /var/run/dpdk/spdk_pid74175 00:28:20.105 Removing: /var/run/dpdk/spdk_pid74297 00:28:20.105 Removing: /var/run/dpdk/spdk_pid74440 00:28:20.105 Removing: /var/run/dpdk/spdk_pid74520 00:28:20.105 Removing: /var/run/dpdk/spdk_pid74660 00:28:20.105 Removing: /var/run/dpdk/spdk_pid75024 00:28:20.105 Removing: /var/run/dpdk/spdk_pid75075 00:28:20.105 Removing: /var/run/dpdk/spdk_pid75548 00:28:20.105 Removing: /var/run/dpdk/spdk_pid75733 00:28:20.105 Removing: /var/run/dpdk/spdk_pid75837 00:28:20.105 Removing: /var/run/dpdk/spdk_pid75949 00:28:20.105 Removing: /var/run/dpdk/spdk_pid76007 00:28:20.105 Removing: /var/run/dpdk/spdk_pid76034 00:28:20.105 Removing: /var/run/dpdk/spdk_pid76323 00:28:20.105 Removing: /var/run/dpdk/spdk_pid76378 00:28:20.105 Removing: /var/run/dpdk/spdk_pid76456 00:28:20.105 Removing: /var/run/dpdk/spdk_pid76843 00:28:20.106 Removing: /var/run/dpdk/spdk_pid76988 00:28:20.106 Removing: /var/run/dpdk/spdk_pid77773 00:28:20.106 Removing: /var/run/dpdk/spdk_pid77915 00:28:20.106 Removing: /var/run/dpdk/spdk_pid78116 00:28:20.106 Removing: 
/var/run/dpdk/spdk_pid78213 00:28:20.106 Removing: /var/run/dpdk/spdk_pid78590 00:28:20.106 Removing: /var/run/dpdk/spdk_pid78865 00:28:20.106 Removing: /var/run/dpdk/spdk_pid79224 00:28:20.106 Removing: /var/run/dpdk/spdk_pid79420 00:28:20.106 Removing: /var/run/dpdk/spdk_pid79545 00:28:20.106 Removing: /var/run/dpdk/spdk_pid79609 00:28:20.106 Removing: /var/run/dpdk/spdk_pid79747 00:28:20.106 Removing: /var/run/dpdk/spdk_pid79783 00:28:20.106 Removing: /var/run/dpdk/spdk_pid79847 00:28:20.364 Removing: /var/run/dpdk/spdk_pid80038 00:28:20.364 Removing: /var/run/dpdk/spdk_pid80293 00:28:20.364 Removing: /var/run/dpdk/spdk_pid80696 00:28:20.364 Removing: /var/run/dpdk/spdk_pid81149 00:28:20.364 Removing: /var/run/dpdk/spdk_pid81553 00:28:20.364 Removing: /var/run/dpdk/spdk_pid82045 00:28:20.364 Removing: /var/run/dpdk/spdk_pid82182 00:28:20.364 Removing: /var/run/dpdk/spdk_pid82287 00:28:20.364 Removing: /var/run/dpdk/spdk_pid82908 00:28:20.364 Removing: /var/run/dpdk/spdk_pid82989 00:28:20.364 Removing: /var/run/dpdk/spdk_pid83409 00:28:20.364 Removing: /var/run/dpdk/spdk_pid83815 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84303 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84420 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84475 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84546 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84612 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84682 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84899 00:28:20.364 Removing: /var/run/dpdk/spdk_pid84966 00:28:20.364 Removing: /var/run/dpdk/spdk_pid85039 00:28:20.364 Removing: /var/run/dpdk/spdk_pid85118 00:28:20.364 Removing: /var/run/dpdk/spdk_pid85154 00:28:20.364 Removing: /var/run/dpdk/spdk_pid85227 00:28:20.364 Removing: /var/run/dpdk/spdk_pid85369 00:28:20.364 Clean 00:28:20.364 13:24:12 -- common/autotest_common.sh@1451 -- # return 0 00:28:20.364 13:24:12 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:28:20.364 13:24:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:20.364 13:24:12 -- common/autotest_common.sh@10 -- # set +x 00:28:20.364 13:24:12 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:28:20.364 13:24:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:20.364 13:24:12 -- common/autotest_common.sh@10 -- # set +x 00:28:20.364 13:24:12 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:20.364 13:24:12 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:20.364 13:24:12 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:20.364 13:24:12 -- spdk/autotest.sh@395 -- # hash lcov 00:28:20.364 13:24:12 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:20.364 13:24:12 -- spdk/autotest.sh@397 -- # hostname 00:28:20.364 13:24:12 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:20.622 geninfo: WARNING: invalid characters removed from testname! 
00:28:52.690 13:24:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:52.690 13:24:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:55.221 13:24:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:58.510 13:24:50 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:01.046 13:24:53 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:04.342 13:24:55 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:06.873 13:24:58 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:06.873 13:24:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:06.873 13:24:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:06.873 13:24:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.873 13:24:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.873 13:24:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.873 13:24:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.873 13:24:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.873 13:24:58 -- paths/export.sh@5 -- $ export PATH 00:29:06.873 13:24:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.873 13:24:58 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:06.873 13:24:58 -- common/autobuild_common.sh@447 -- $ date +%s 00:29:06.873 13:24:58 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721913898.XXXXXX 00:29:06.873 13:24:58 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721913898.9AGdje 00:29:06.873 13:24:58 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:29:06.873 13:24:58 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:29:06.873 13:24:58 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:06.873 13:24:58 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:06.873 13:24:58 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:06.873 13:24:58 -- common/autobuild_common.sh@463 -- $ get_config_params 00:29:06.873 13:24:58 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:29:06.873 13:24:58 -- common/autotest_common.sh@10 -- $ set +x 00:29:06.873 13:24:58 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:29:06.873 13:24:58 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:29:06.873 13:24:58 -- pm/common@17 -- $ local monitor 00:29:06.873 13:24:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:06.873 13:24:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:06.873 13:24:58 -- pm/common@25 -- $ sleep 1 00:29:06.873 13:24:58 -- pm/common@21 -- $ date +%s 00:29:06.873 13:24:58 -- pm/common@21 -- $ date +%s 00:29:06.873 13:24:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721913898 00:29:06.873 13:24:58 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721913898 00:29:06.873 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721913898_collect-vmstat.pm.log 00:29:06.873 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721913898_collect-cpu-load.pm.log 00:29:07.809 13:24:59 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:29:07.809 13:24:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:07.810 13:24:59 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:07.810 13:24:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:07.810 13:24:59 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:07.810 13:24:59 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:07.810 13:24:59 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:07.810 13:24:59 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:07.810 13:24:59 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:07.810 13:24:59 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:07.810 13:24:59 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:07.810 13:24:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:07.810 13:24:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:07.810 13:24:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:07.810 13:24:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:07.810 13:24:59 -- pm/common@44 -- $ pid=87042 00:29:07.810 13:24:59 -- pm/common@50 -- $ kill -TERM 87042 00:29:07.810 13:24:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:07.810 13:24:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:07.810 13:24:59 -- pm/common@44 -- $ pid=87044 00:29:07.810 13:24:59 -- pm/common@50 -- $ kill -TERM 87044 00:29:07.810 + [[ -n 5192 ]] 00:29:07.810 + sudo kill 5192 00:29:07.819 [Pipeline] } 00:29:07.838 [Pipeline] // timeout 00:29:07.843 [Pipeline] } 00:29:07.862 [Pipeline] // stage 00:29:07.867 [Pipeline] } 00:29:07.885 [Pipeline] // catchError 00:29:07.895 [Pipeline] stage 00:29:07.897 [Pipeline] { (Stop VM) 00:29:07.912 [Pipeline] sh 00:29:08.192 + vagrant halt 00:29:12.376 ==> default: Halting domain... 00:29:18.948 [Pipeline] sh 00:29:19.227 + vagrant destroy -f 00:29:23.423 ==> default: Removing domain... 
00:29:23.693 [Pipeline] sh 00:29:23.972 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:29:23.982 [Pipeline] } 00:29:24.001 [Pipeline] // stage 00:29:24.008 [Pipeline] } 00:29:24.025 [Pipeline] // dir 00:29:24.031 [Pipeline] } 00:29:24.049 [Pipeline] // wrap 00:29:24.055 [Pipeline] } 00:29:24.070 [Pipeline] // catchError 00:29:24.080 [Pipeline] stage 00:29:24.083 [Pipeline] { (Epilogue) 00:29:24.097 [Pipeline] sh 00:29:24.380 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:31.047 [Pipeline] catchError 00:29:31.049 [Pipeline] { 00:29:31.063 [Pipeline] sh 00:29:31.342 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:31.599 Artifacts sizes are good 00:29:31.606 [Pipeline] } 00:29:31.622 [Pipeline] // catchError 00:29:31.632 [Pipeline] archiveArtifacts 00:29:31.638 Archiving artifacts 00:29:31.814 [Pipeline] cleanWs 00:29:31.824 [WS-CLEANUP] Deleting project workspace... 00:29:31.824 [WS-CLEANUP] Deferred wipeout is used... 00:29:31.830 [WS-CLEANUP] done 00:29:31.831 [Pipeline] } 00:29:31.848 [Pipeline] // stage 00:29:31.853 [Pipeline] } 00:29:31.869 [Pipeline] // node 00:29:31.875 [Pipeline] End of Pipeline 00:29:31.932 Finished: SUCCESS